Experimental evaluation of soil petrophysical attributes: Implications for sustainable agriculture

Agriculture is man's major supplier of his needs, particularly his primary need, which is food. Soil, as a major component of sustainable agricultural production, needs to be studied and understood: its characteristics determine the type of crop that will grow and the nature of the crop's yield. The area of study is the Covenant University farmland, where twenty soil samples were collected and petrophysical parameters such as conductivity and salinity were analysed for each sample.

Introduction

Soil is an important component of agriculture. Human civilization has thrived over the centuries because of man's ability to use the resources around him to sustain his livelihood, agriculture being one of the principal means. Agriculture not only provides man with food but also with other resources that have made his environment conducive and comfortable; it supplies raw materials for the production of other useful materials and serves as a source of income at both the individual and community scale. With the numerous benefits of agriculture, there is a need for the practice to be sustained. Agriculture as a science requires the systematic study of all factors that affect its sustainability. These factors can be categorized into two groups: economic factors and environmental factors. The environmental factors include climate, soil type, soil texture, soil salinity, soil conductivity, moisture content and topography, among others. Soil conductivity, soil salinity, soil water-holding capacity and soil hydraulic characteristics are significant in the design and operation of irrigation systems. The level of salt content in soil affects crop yield, as a high salt content may reduce or hinder it. Salt tolerance refers to the relative capacity of plants to grow or thrive when subjected to saline soils. Excess salt accumulated in the soil moves into the plant's transpiration stream through the roots, damaging plant cells and further reducing growth. Physical signs of high soil salinity in crops include stunted growth, brown or yellow leaves and dying leaves. Most soil minerals, such as feldspar, quartz, clay minerals and iron oxide, are not conductors of electricity. However, most soils exhibit some level of electrical conductivity depending on the amount of dissolved salts present in the pore water. Sources of salinity in the soil include rainfall and the build-up of salts over time (due to geogenic processes), as well as human-induced causes such as irrigation, poor drainage and changes in plant cultivation methods. The degree of soil salinity in an area has varying effects on the biotic components of the biosphere. Soil salinity in West Africa has increased over time as a result of irrigated agricultural practices in farmlands [1]. Several studies have applied geophysical, GIS and remote sensing techniques to soil contamination characterization, groundwater contamination, agricultural salinity and soil geoengineering problems [2][3][4][5][6][7][8][9][10][11].
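As a side note on the conductivity-salinity link described above, a common field rule of thumb relates electrical conductivity to total dissolved solids as TDS (mg/L) ≈ 640 × EC (dS/m). The factor 640 (typical range roughly 550-700 for mixed-salt solutions) is an assumed rule of thumb, not a value from this paper. A minimal sketch:

```python
def ec_to_tds(ec_ds_per_m, factor=640.0):
    """Approximate total dissolved solids (mg/L) from electrical
    conductivity (dS/m). The conversion factor (~550-700, 640 here)
    is an assumed rule of thumb for mixed-salt pore water."""
    return factor * ec_ds_per_m

# Example: the mean conductivity reported below (0.152 dS/m)
print(ec_to_tds(0.152))  # ~97 mg/L, i.e., a low dissolved-salt load
```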
The focus of this research is to assess the level of salt concentration in the soil of a farm within Covenant University, Nigeria, using the electrical conductivity method.

Study Area

The area of study is a farmland within the Covenant University campus, Sango-Ota, Ogun State, southwestern Nigeria (Figure 1). Figure 2 shows that the area lies geologically within the eastern section of the Dahomey Basin, with an east-west trend of sediment deposition and six lithostratigraphic units comprising the Benin, Ilaro, Oshosun, Akinbo, Ewekoro and Abeokuta Formations, from the youngest to the oldest. The Abeokuta Formation, of Cretaceous age, has been classified as a Group divided into the Ise, Afowo and Araromi Formations; it consists of a sequence of poorly sorted grits and pebbly sands with intercalations of siltstones, mudstones and shaly clay. The Ewekoro Formation is a Paleocene shallow-marine deposit of non-crystalline and non-fossiliferous limestone strata. The Akinbo shale units are of Late Paleocene to Early Eocene age and are overlain by the Eocene shale of the Oshosun Formation. Coarse estuarine, deltaic and continental sandy sequences of the Ilaro Formation overlie the Oshosun Formation. Coastal plain sands and Tertiary alluvium deposits of the Benin Formation are the youngest, overlying the Ilaro Formation.

Samples Collection and Preparation

Twenty (20) soil samples were obtained within the farmland for laboratory analysis. The samples were sieved to remove pebbles and other irrelevant materials that could introduce error into the final output of the analysis, and then properly dried. Approximately 5 g of each of the 20 soil samples was weighed into a correctly labelled beaker, 100 ml of distilled water was added, and the mixture was stirred with a spatula. The beakers were covered with aluminium foil and set aside, and the mixtures were allowed to dissolve over 48 hours before being tested with a JENWAY 4510 meter for conductivity, temperature and salinity.

Results and Discussion

The results of the laboratory analyses for conductivity, salinity and temperature of the soil samples, as well as the weight of each sample, are presented in charts (Figure 3). Soil conductivity ranges from 0.04 to 0.37 dS/m with a mean of 0.152 dS/m, soil salinity ranges from 0.03 to 0.18 percent with a mean of 0.08 percent, and the measured soil temperature ranges from 29.0 to 30.0 °C with a mean of 29.4 °C. The level of soil conductivity is optimal, and the measured soil salinity is adjudged relatively low.

Conclusions

In this study, soil conductivity, salinity and temperature were investigated in a farmland within the Covenant University campus, Nigeria. The aim of the research was to assess the level of salt concentration in the soil using the electrical conductivity method. The study is essential because soil salinization is a worldwide challenge, and knowledge of soil properties helps farmers decide which crop type to cultivate on a particular farmland. The results of the laboratory analyses show that the measured petrophysical parameters are within standard regulatory levels and that the level of salinization within the farmland is relatively normal, making it suitable for plant growth.
It is, however, recommended that routine salinity tests be carried out for adequate monitoring.
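As an illustration of how such laboratory readings can be summarized and screened, here is a minimal sketch; the readings and the 2 dS/m non-saline threshold (a commonly used saturated-extract criterion) are assumptions for demonstration, since the paper reports only the range and mean, not the 20 individual values.

```python
import statistics

# Hypothetical conductivity readings (dS/m) spanning the reported
# range of 0.04-0.37 dS/m; the actual per-sample values are not
# listed in the paper.
conductivity = [0.04, 0.09, 0.12, 0.15, 0.21, 0.28, 0.37]

print(f"range: {min(conductivity)}-{max(conductivity)} dS/m")
print(f"mean:  {statistics.mean(conductivity):.3f} dS/m")

# Screen each sample against a commonly used non-saline threshold
# (EC < 2 dS/m for a saturated-paste extract).
for ec in conductivity:
    label = "non-saline" if ec < 2.0 else "saline"
    print(f"{ec:.2f} dS/m -> {label}")
```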
Using Molecular Design to Increase Hole Transport: Backbone Fluorination in the Benchmark Material Poly(2,5‐bis(3‐alkylthiophen‐2‐yl)thieno[3,2‐b]thiophene) (pBTTT)

The synthesis of a novel 3,3′-difluoro-4,4′-dihexadecyl-2,2′-bithiophene monomer and its copolymerization with thieno[3,2-b]thiophene to afford the fluorinated analogue of the well-known poly(2,5-bis(3-alkylthiophen-2-yl)thieno[3,2-b]thiophene) (PBTTT) polymer is reported. Fluorination is found to have a significant influence on the physical properties of the polymer, enhancing aggregation in solution and increasing the melting point by over 100 °C compared to the nonfluorinated polymer. On the basis of DFT calculations, these observations are attributed to inter- and intramolecular S…F interactions. As a consequence, the fluorinated polymer PFBTTT exhibits a fourfold increase in charge carrier mobility compared to the nonfluorinated polymer and excellent ambient stability for a nonencapsulated transistor device.

Introduction

Polythiophenes and thiophene-containing derivatives have been the subject of much research in the field of organic semiconductors, owing to their relative stability, ease of synthesis, tuneable energy levels, and propensity for self-assembly. The popularity of poly(3-hexylthiophene) (P3HT) as an active material in both organic field-effect transistors (OFETs) and organic photovoltaics has led to many derivatives being investigated over the past decade. [1-3] In many semicrystalline polymers such as polythiophene derivatives, increased aggregation strongly correlates with increased transistor performance, provided the polymer aggregates and crystallites are appropriately interconnected with so-called 'tie-molecules'. [4-7] Processing techniques such as [...] aromatics and can be overcome by chemically fusing the units together. [12,18] Another approach is to promote nonbonding interactions between adjacent aromatic monomers to assist backbone planarization. [19-22] We were particularly interested to investigate the effect of thiophene fluorination in PBTTT because nonbonding S…F interactions have previously been shown to have a planarizing effect in conjugated polymers. [23-27] In addition, fluorination could increase the IP of PBTTT as it does for F-P3ATs, and hence the oxidative stability of a polymer often considered as on the cusp of air stability. [13,24,25,28-30] In considering PBTTT, there are two possible sites of fluorination: on the fused thieno[3,2-b]thiophene or on the alkylated bithiophene co-monomer. We chose to explore the influence of fluorination on the bithiophene monomer because in this arrangement the fluorine substituents are 'head-to-head' with respect to each other, which should maximize any possible S…F interactions. In this manuscript, we present a comparative study of hexadecyl derivatives of PBTTT and poly(2,5-bis(4-fluoro-3-hexadecylthiophen-2-yl)thieno[3,2-b]thiophene) (PFBTTT). We explore the effect of fluorination on backbone planarity through density functional theory (DFT), present the synthesis and characterization of these polymers, and probe the influence of fluorination on OFET performance and air stability, as well as thin film morphology through X-ray diffraction analysis and atomic force microscopy (AFM).

DFT Calculations

DFT calculations are a useful tool to predict and assess the potential planarity of conjugated polymers.
[19] Trimeric units (for A-B copolymer systems) are often used as analogues to the polymer, as they strike an appropriate balance between predicting the basic properties of interest and allowing the calculations to be completed in reasonable computational time. For this reason, we opted to run the geometry optimizations on units where the hexadecyl side chains were replaced with propyl groups, in order to emulate the steric bulk near the polymer backbone while keeping computational time low. The minimum energy conformations of the monomeric species were optimized using the B3LYP functional and a 6-31G(d) basis set, and from these optimized geometries the minimum energy conformations of the corresponding trimers (BTTT and FBTTT) were calculated at the same level of theory. The optimized ground-state geometries of the fluorinated and nonfluorinated trimers are shown in Figure 1a. The bithiophene link exhibits a dihedral angle (θ1) of 12.7° in the nonfluorinated trimer, but upon fluorination this angle becomes 0.1°, indicating that the two fluorinated thiophenes are essentially coplanar. We believe this planarization is Coulombic in nature, as is the case in the aforementioned F-P3AT systems. [24] Mulliken charges show that both the hydrogen atom in the 4-position of the thiophene unit and the proximate sulfur atom in the neighboring thiophene ring carry a slight positive charge in the BTTT trimer, leading to electrostatic repulsion and therefore a non-negligible torsional angle. In contrast, in FBTTT the 4-position is occupied by a fluorine atom that carries a slight negative charge, leading to an attractive interaction and consequently reducing the torsional angle to near planarity. While we acknowledge that Mulliken charges must be treated with caution in conjugated systems, they nevertheless provide an indication of the sign of the charge on atoms, which in this case proves crucial to rationalizing the seemingly contradictory planarization incurred by substituting a hydrogen atom with a fluorine atom of larger covalent radius. Also of interest is the very slight planarization of the thiophene-thienothiophene link (θ2), from 41.8° to 39.1°, upon fluorination of the bithiophene unit. The planarizing effect of the fluorine substitution is particularly evident in the change in potential energy as a function of θ1 for the trimers, as shown in Figure 1b. We note that the slight asymmetry in the potential energy scan is present regardless of the rotation direction calculated. Aside from the minimum energy conformation being closer to planarity in FBTTT, the barrier to rotation is also much larger in FBTTT than in BTTT, and the syn conformation of the thiophene-thiophene link is strongly disfavored in the former case. This conformation places the two fluorine atoms in close proximity, with the steric and electrostatic repulsion leading to the observed higher potential energy. This is less apparent in BTTT, presumably due to the smaller van der Waals radius of hydrogen compared to fluorine, and its lower partial charge (+0.15 and −0.28, respectively). A Boltzmann analysis of the relative populations at room temperature (Figure 1c) clearly illustrates this. Indeed, while BTTT shows a non-negligible population of syn thiophene-thiophene links (θ1 = ±180°), FBTTT exhibits a narrow distribution of conformations around the trans coplanar conformer (θ1 = 0°) and a very small proportion of the competing syn analogue.
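The Boltzmann analysis in Figure 1c converts the scanned torsional potential into relative conformer populations via p_i ∝ exp(−E_i/kT). A minimal sketch of that conversion, using made-up scan energies rather than the paper's computed B3LYP/6-31G(d) values:

```python
import math

K_B = 0.0019872  # Boltzmann constant, kcal/(mol*K)
T = 298.15       # room temperature, K

# Hypothetical relative energies (kcal/mol) for a dihedral scan of
# theta_1 in 10-degree steps; this toy potential has its minimum at
# the trans coplanar conformer (0 deg) and maxima at syn (+/-180 deg).
angles = list(range(-180, 181, 10))
energies = [0.5 * (1 - math.cos(math.radians(a))) for a in angles]

# Normalized Boltzmann populations p_i = exp(-E_i/kT) / Z.
weights = [math.exp(-e / (K_B * T)) for e in energies]
z = sum(weights)
populations = [w / z for w in weights]

for a, p in zip(angles, populations):
    if a % 60 == 0:
        print(f"theta1 = {a:4d} deg : population = {p:.3f}")
```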
Together these results suggest that fluorination will result in a significant increase in the rigidity of the backbone and a preference for the trans coplanar conformation. Visual representations of the HOMO and LUMO of the BTTT and FBTTT trimers are shown in Figure S1 (Supporting Information). Both show extended delocalization of the two frontier molecular orbitals, typical of all-donor conjugated units. [17,44] The substitution of hydrogen atoms by fluorine in the bithiophene unit seems to have only a minimal effect on the orbital distribution, in the form of a minor contribution to both the HOMO and the LUMO.

Synthesis

Building upon our recent synthesis of short-chain 3-alkyl-4-fluorothiophene derivatives, we decided to utilize the fluorinated building block 1 and introduce the hexadecyl side chain via a cross-coupling methodology (Scheme 1). [24] Since backbone fluorination can result in a reduction in polymer solubility, the hexadecyl side chain was chosen as it is the longest solubilizing group which has been demonstrated to still afford good FET performance in PBTTT. [24,28,31] Alkylation of 1 was best achieved via Negishi coupling using commercially available hexadecylzinc bromide in the presence of catalytic Pd(dppf)Cl2. Superheating the tetrahydrofuran (THF) solvent to 100 °C in a microwave reactor was required to afford good yields of 2. This was purified either by reverse-phase chromatography on C18-functionalized silica using a mixed acetonitrile/THF eluent, or the crude mixture was deprotected to give 3, which was then purified by normal-phase flash chromatography on silica. We opted for the latter for larger-scale reactions, since unfunctionalized silica can support a higher loading capacity. Tail-to-tail dimerization of 3-alkylthiophenes is usually easily achieved through lithiation followed by oxidative coupling using copper(II) chloride. [32] Though the initial lithiation of 3 was achieved, as confirmed by quenching with a solution of iodine in dry THF, the oxidative coupling step proved problematic. Evidently, the fluorine substitution at the 4-position plays a role in hindering the oxidative coupling with copper(II) chloride. We therefore decided to regioselectively metallate 3 using the sterically hindered Knochel-Hauser base before adding 0.5 equivalents of Ni(dppp)Cl2. Reductive elimination from the resulting Ni(II) salt afforded the desired tail-to-tail bithiophene 4 in good yield (73%). Subsequent bromination led to monomer 5, which was copolymerized with 2,5-bis(trimethylstannyl)thieno[3,2-b]thiophene under microwave polymerization conditions to yield PFBTTT. [33] For comparison purposes, PBTTT was synthesized under identical microwave polymerization conditions. Both polymers were purified by precipitation and subsequent Soxhlet extraction with methanol, acetone, and hexane to remove catalyst impurities and low molecular weight oligomers. In the case of PFBTTT the crude polymer was also extracted with chloroform to remove low molecular weight polymeric material, before extraction into chlorobenzene. In both cases, a final purification by precipitation of chlorobenzene solutions into methanol was performed.
Physical Properties

In order to ensure a fair comparison and exclude molecular weight effects, which are known to affect the performance of PBTTT, [34-36] we compared batches of PBTTT and PFBTTT with similar molecular weight distributions (Mn 42 kDa, Ð 1.5 and Mn 44 kDa, Ð 2.0, respectively, as measured by high-temperature gel-permeation chromatography (HT GPC) against polystyrene standards). The basic physical properties of PFBTTT all suggest a greater degree of aggregation compared to its nonfluorinated analogue PBTTT. First, PFBTTT is only soluble in near-boiling chlorobenzene, rendering processing from this traditional solvent difficult. We therefore adopted 1,2,4-trichlorobenzene (TCB) as a solvent for most processing techniques, due to its good solubilizing properties and high boiling point. In TCB, PFBTTT is readily soluble at temperatures exceeding 135 °C, yet precipitates soon after cooling to 130 °C, while PBTTT remains soluble for several minutes at room temperature. The solution UV-vis spectra of PBTTT, both hot and at room temperature, exhibit a single broad absorption band typical of fully solvated polythiophene derivatives (Figure 2 and Table 1). [37] The slight blueshift of 12 nm of the absorption maximum upon increasing the solution temperature is attributed to increased backbone torsion, which reduces the effective conjugation length. In contrast, the hot and room-temperature solution spectra of PFBTTT are very different. The hot solution spectrum shows that the majority of the absorption arises from solvated polymer, similar to PBTTT. There is very little difference in λmax for the two polymers in hot solution. When the PFBTTT solution is cooled, the absorption spectrum redshifts by 69 nm to give a spectrum which strongly resembles that of the thin film, suggesting the polymer has planarized and aggregated in solution, similar to the formation of polythiophene aggregates in poor solvents. [38-41] Even in the hot solution, the presence of a slight shoulder at longer wavelengths suggests some aggregate is still present. Upon thin film formation, both polymers display a redshift in λmax of around 70 nm compared to the hot solutions, as the polymers planarize in the solid state. In the case of PFBTTT, this is accompanied by the appearance of pronounced shoulders at both longer and shorter wavelengths. These shoulders are typical of the vibronic progression observed in many polythiophenes and are ascribed to the formation of ordered aggregates in the film. For PBTTT, the vibronic features are less obvious in the as-spun film, but become sharper upon thermal annealing at 200 °C. Thermal annealing also results in changes to the PFBTTT spectra, with the longer-wavelength shoulder increasing in intensity and the shorter-wavelength one diminishing. These changes are consistent with some structural reordering in the films upon annealing. Both polymers exhibit very similar optical band gaps, as measured by the absorption edge of the as-spun films (Table 1). Thermal annealing results in films with an identical band gap. The effect of fluorination on the IP of thin films was minimal, with no difference observed within the experimental error (±0.05 eV) of the measurement by photoelectron spectroscopy in air (PESA) (Table 1). This is in contrast to the results for F-P3ATs, in which a clear (≈0.25-0.4 eV) increase in IP was observed in comparison to the analogous P3ATs, as well as a widening of the optical band gap.
[24] The DFT calculations suggest that fluorination would result in a modest stabilization of the HOMO by 0.16 eV and of the LUMO by 0.18 eV relative to the nonfluorinated trimer. The difference between the experimental and theoretical results may therefore be related to the error of the PESA measurement, since we note that the transistor results (vide infra) show higher contact resistance for the fluorinated polymer than for the nonfluorinated polymer, which would support a slight increase in IP upon fluorination. The thermal properties of the two polymers were investigated by differential scanning calorimetry (DSC). Our initial experiments utilized conventional DSC at heating rates of 10 °C min⁻¹ (Figure S13, Supporting Information). In these experiments, we were able to clearly observe the LC phase of PBTTT, with two well-defined endotherms on heating, at 126 and 238 °C, which have previously been attributed to a side-chain and a backbone melt. [42,43] However, despite heating to 390 °C, we could not observe a well-defined backbone melt for PFBTTT. We therefore moved to flash DSC, in which heating rates of 500 K s⁻¹ are obtainable. Such rapid heating and cooling rates increase the sensitivity of the measurement and also allow the investigation of temperature regimes not readily accessible with conventional DSC due to competing decomposition processes. The flash DSC thermograms of both polymers are shown in Figure 2. The effect of the two fluorine atoms on the thermal properties is remarkable, with an increase in the backbone melt of approximately 100 °C, from 250 °C for PBTTT to 350 °C for PFBTTT. Note that melting enthalpies are not readily accessible with flash DSC due to the small sample size. This increase is much more substantial than would be expected from a simple molecular weight argument (based upon the increased atomic mass of F over H), and is therefore supportive of the increased intra- and intermolecular interactions that would result from the more coplanar bithiophene unit suggested by the DFT calculations. It is also interesting that the temperature of the low-temperature endotherm reduces upon fluorination, from 66 to 55 °C. A similar reduction in side-chain melting point is observed for a PBTTT analogue in which the alkyl side chains are moved from the bithiophene unit, where each thiophene can rotate independently, to a thieno[3,2-b]thiophene, where the alkyl side chains must move co-operatively. [30] A similar co-operative movement of the side chains might be expected for the fluorinated bithiophene if the S…F interaction were pronounced, as suggested by the DFT calculations. Finally, we note that thermogravimetric analysis of PFBTTT demonstrates excellent thermal stability, with 5% weight loss occurring only beyond 420 °C (Figure S14, Supporting Information).

Transistor Fabrication and Characterization

The electrical properties of the polymers were studied employing thin film transistors. Films were prepared by spin-coating from TCB in both cases, followed by annealing at 200 °C for 30 min. We note that the transistor performance of PBTTT, to the best of our knowledge, has not previously been reported for films cast from TCB. Bottom-gate top-contact configuration devices for both polymers exhibit typical unipolar hole-transporting behavior with low hysteresis between the forward and reverse gate voltage (VG) sweeps and moderate on/off channel current ratios of ≈10³-10⁴, as shown in Figure 3.
It is notable that both the peak and average charge carrier mobility values increase upon fluorination by approximately a factor of 4, with the best devices exhibiting a saturated mobility of ≈0.32 cm² V⁻¹ s⁻¹ for the fluorinated polymer, compared to ≈0.069 cm² V⁻¹ s⁻¹ for the nonfluorinated analogue. The performance data of the two polymers are summarized in Table 2. [Table 1 footnotes: a) Measured in 1,2,4-trichlorobenzene (TCB) at 140 °C relative to polystyrene standards; b) Measured in room-temperature TCB, or hot (>80 °C) TCB in parentheses; c) Films spin-coated from TCB; d) Optical band gap estimated from the absorption onset of as-spun films; e) Ionization potential measured by photoelectron spectroscopy in air (PESA), error ±0.05 eV; f) Lowest unoccupied molecular orbital estimated from the optical band gap and PESA measurements; g) Measured by flash DSC at heating and cooling rates of 500 K s⁻¹.] Although lower than those of the best donor-acceptor polymers, these values are still high for an all-donor system. [17,44] While the mobilities reported here for PBTTT devices (max 0.069 cm² V⁻¹ s⁻¹) are lower than those often reported in the literature (0.1-1 cm² V⁻¹ s⁻¹), we attribute this mainly to the longer hexadecyl alkyl-chain length used in our case (as opposed to the tetradecyl used in higher-performing PBTTT-based transistors), as well as to the different processing solvent (TCB compared to mixtures of chloroform and chlorobenzene). In fact, the few reports of hexadecyl-substituted PBTTT transistor properties involve either a highly optimized and rigorous device fabrication process or a tailored 'spread-and-compress' film formation on top of an ionic liquid. [29,45,46] The marginally higher threshold voltages (V_TH) required for PFBTTT devices, along with the slightly sigmoidal output curves and the increasing slope of the I_SD^1/2 versus V_G curve, suggest increased contact resistance and/or localized high-energy trap states for PFBTTT over PBTTT. [47] Previously, the use of high work function electrodes like Pt has been shown to reduce contact resistance in PBTTT and may provide a possible route to further improve device performance. [14] Previous reports have demonstrated that fluorination of the conjugated side chain or backbone can result in improved device stability in the presence of ambient air, with the effect ascribed either to an increase in the IP resulting from the electron-withdrawing influence of fluorine or to a kinetic effect reducing water ingress due to closer packing of the conjugated molecules in the solid state. [48,49] Therefore, we investigated the stability of the PBTTT and PFBTTT devices by removing them from the glove box and storing them in the dark under ambient conditions with an average temperature of 20 °C and relative humidity of 50%. The air stability of PBTTT-based devices varies considerably in the literature. In the case of C12- and C14-PBTTT devices, stability in ambient conditions is known to be quite poor, with the off current rising and the charge mobility dropping to about 20% of the original value after 5 and 22 d, respectively. [13,30] Humidity was shown to have a significant deleterious impact on the charge carrier mobility, with storage at low humidity shown to drastically improve the operational lifetime of these devices. [30] High humidity levels have been shown to result in the formation of charge traps in the film in studies on related polythiophenes.
[50] It is worth noting that the increase in off-current and shift in threshold voltage reported for many polythiophene derivatives may not be due solely to the effects of oxygen or water, but also to minor impurities in ambient air such as ozone, which can act as a reversible dopant for the polymer. [51] To our knowledge, the only stability study performed on C16-PBTTT was by Umeda et al., in which repeated stressing in ambient conditions over 2 d had little impact on field-effect characteristics such as mobility and on/off ratio. [29] The transfer plots of PBTTT and PFBTTT after 3, 55, and 75 d storage in the dark in ambient air are shown in Figure 4 and the data are summarized in Table 3. For both polymers the charge carrier mobility remained relatively constant over the test period, but we did observe changes in the threshold voltage and on/off channel current ratio over time. In particular, the off current rose rapidly after 3 d, with a large positive shift in the threshold voltage for both materials. Upon continued storage the threshold voltage moved back toward the original values, and the off currents dropped. The fluorinated polymer consistently maintained a higher on/off ratio than PBTTT, with the device after 75 d exhibiting a value around 5 × 10⁴. The PFBTTT transistor also demonstrated a sharper turn-on with a narrower subthreshold swing than PBTTT. As discussed above, the shifts in threshold voltage and increase in off current are typical of oxidative doping making the device more conductive, and as such difficult to switch off. The fluctuations in threshold voltage and on/off ratio over time suggest that this doping is reversible, and the lower currents for the fluorinated polymer suggest it is less susceptible to such doping, in common with other fluorinated materials. [22,52]

Thin Film Morphology

While it is widely accepted that PBTTT possesses a high degree of order in the lamellar direction, the π-stacking direction exhibits a relatively high degree of paracrystallinity. [53,54] The factors that allow PBTTT to have high hole mobilities are therefore considered to be its relatively low-energy trap states and high edge-on orientation. [53-55] In order to assess the impact of PBTTT fluorination on its orientational order, we performed grazing-incidence wide-angle X-ray scattering (GIWAXS) on films prepared in the same way as the OFET devices (Figure 5). Both films exhibit diffraction patterns consistent with lamellar ordering of the polymers with an edge-on orientation with respect to the substrate. In the as-cast films, the crystalline domains of PFBTTT possess a higher degree of edge-on orientation, which is apparent from the more localized out-of-plane scattering patterns corresponding to diffraction in the lamellar direction. Indeed, arcing of these diffraction peaks is attributed to misalignment of the crystallites with respect to the surface. [15] The lamellar spacing is similar for both polymers, at 2.33 nm for PFBTTT and 2.35 nm for PBTTT. This is in agreement with the d-spacing previously observed for C16-PBTTT and suggests that the alkyl side chains of adjacent polymer backbones are interdigitated for both polymers. [56] Annealing the films results in an increase in the intensity of the diffraction peaks for both polymers, most clearly observed in the out-of-plane line profiles in the Supporting Information (Figure S15).
The d-spacing does not change for PFBTTT, while it slightly narrows for PBTTT, to 2.315 nm, suggesting either enhanced crystallinity or a change in crystal orientation in the film. That the in-plane lamellar diffraction peaks gradually disappear for PFBTTT upon annealing would suggest the latter: the misaligned polymer domains reorient to become predominantly edge-on. For PBTTT, the out-of-plane peaks also increase in intensity and the arcing reduces, consistent with a similar increase in edge-on alignment. However, the in-plane scan suggests that the misaligned domains remain upon annealing (Figure S15, Supporting Information), which was not the case for PFBTTT. High-resolution X-ray diffraction measurements also show an increase in intensity of the (100) and corresponding higher-order diffraction peaks upon annealing for both polymers (Figure S16, Supporting Information), thus confirming the increased crystallinity suggested by GIWAXS. The greater orientational order in PFBTTT compared to PBTTT could therefore be one of the factors behind the increase in mobility upon fluorination. We also note that PBTTT films annealed at 150 °C exhibit a splitting of the (100) and higher-order out-of-plane signals, possibly due to different degrees of interdigitation and side-chain reorganization. The surface morphology of the films was also investigated by AFM, on films made under the same conditions as used for device fabrication. The first point to note is that the film morphology achieved for PBTTT spin-coated from TCB is very different from the terraced nanostructure observed when spin-coating from other solvents such as 1,2-dichlorobenzene and chloroform. [13,34,42] Indeed, we observe that PBTTT appears to have a large quantity of pinholes distributed across the whole film (Figure S17, Supporting Information), which were likely formed by solvent-vapor bubbles during drying. Although films of PBTTT and PFBTTT have a similar RMS roughness (6.79 nm and 6.99 nm, respectively), these holes probably have a detrimental effect on the continuity of the film and introduce extra traps, potentially hindering the transport of charge carriers. In the case of PFBTTT, however, we observe a film comprising a more fibrillar morphology (Figure S17, Supporting Information). Though highly entangled, the fibrillar structure could facilitate the release of solvent vapor, leaving a more continuous film with fewer pinholes. In fact, when diluted to 2 mg mL⁻¹, PFBTTT forms interconnected nanofibrils, with heights of approximately 5 nm and widths of 200-500 nm (Figure 6), unlike PBTTT under the same dilution (Figure S18, Supporting Information). This network of intricately woven fibers could be a morphological explanation for the increased mobility observed in PFBTTT. Indeed, recent studies have shown that interconnectivity of ordered domains is crucial to achieving high mobilities with polythiophene derivatives. [2,21,22]

Conclusions

In conclusion, we have described the synthesis of a novel 3,3′-difluoro-4,4′-dihexadecyl-2,2′-bithiophene monomer and report its copolymerization with thieno[3,2-b]thiophene to afford the fluorinated analog of the well-known PBTTT polymer. We find that backbone fluorination has a pronounced influence on the physical properties of the polymer, with a significantly enhanced degree of aggregation compared to the nonfluorinated analog.
Remarkably, we find that the incorporation of just two fluorine atoms on the polymer backbone results in an increase in the polymer melting temperature of 100 °C. DFT calculations suggest this increased aggregation originates from a greater degree of backbone planarity and rigidity. A result of this greater rigidity is an enhancement of the edge-on orientational order, and consequently a significant increase in hole mobility in OFET devices, which is also aided by an appreciably interwoven fibrillar morphology. In addition, the fluorinated polymer exhibits excellent ambient stability for a nonencapsulated transistor device. We believe that these results demonstrate that backbone fluorination is a useful tool in designing high-performance organic semiconductors.

Synthesis of 3-Fluoro-4-Hexadecylthiophene (3): In a dry 20 mL Biotage microwave vial under argon, 1 (2.00 g, 6.15 mmol) and [1,1′-bis(diphenylphosphino)ferrocene]dichloropalladium(II) dichloromethane complex (251 mg, 0.307 mmol) were added. The vial was capped, evacuated, and backfilled with argon three times before adding hexadecylzinc bromide solution (16.0 mL, 0.5 M in THF). After stirring at room temperature for 2 min, the vial was heated for 1 h in a microwave reactor at 100 °C. The resulting mixture (solid at room temperature) was heated to 40 °C, poured into acetone, and filtered. The solvent of the filtrate was removed in vacuo, the residue was passed through a pad of silica using hexane, and the solvent was again removed in vacuo. In a 100 mL round-bottomed flask, the resulting crude mixture of 2 was dissolved in dry THF (10 mL) and cooled to 0 °C before tetra-n-butylammonium fluoride (21.5 mL, 1 M solution in THF) was added. The reaction was stirred for 2 h before quenching with water and extracting with diethyl ether. The organic extracts were dried over MgSO4 and the solvent removed in vacuo. The crude mixture was purified by column chromatography on silica, using hexane as eluent, to yield 3 as a white waxy solid (300 mg, 15% over two steps).

Synthesis of 3,3′-Difluoro-4,4′-Dihexadecyl-2,2′-Bithiophene (4): In a dry 20 mL Biotage microwave vial under argon, 3 (400 mg, 1.23 mmol) was dissolved in dry THF (3.5 mL), and (2,2,6,6-tetramethylpiperidinyl)magnesium chloride lithium chloride complex (1.59 mL, 1 M solution in THF/toluene) was added dropwise at room temperature. The reaction was stirred for 1 h before a dispersion of [1,3-bis(diphenylphosphino)propane]dichloronickel(II) (333 mg, 0.615 mmol) in dry THF (7 mL) was added. The solution turned from orange to dark brown/black and solidified. After diluting with THF, the reaction mixture was poured into dilute HCl and extracted with chloroform. The solvent was removed in vacuo, and the residue passed through a small pad of silica using dichloromethane as eluent. The solvent was removed in vacuo, and the product recrystallized from acetone, to yield 4 as a pale yellow solid (290 mg, 73%).

Synthesis of 5,5′-Dibromo-3,3′-Difluoro-4,4′-Dihexadecyl-2,2′-Bithiophene (5): In a 100 mL round-bottomed flask wrapped in foil, 4 (265 mg, 0.408 mmol) was dissolved in a mixture of chloroform (20 mL) and acetic acid (3 mL), and to this solution was added N-bromosuccinimide (154 mg, 0.815 mmol). The solution was stirred overnight, quenched with saturated sodium sulfite, and extracted with chloroform. The organic layer was washed with 1 M sodium hydroxide, water, and brine, and the solvent was removed in vacuo.
The crude product was recrystallized from a mixture of acetone and ethyl acetate (1:1), to yield 5 as a pale yellow solid (245 mg, 75%).

Synthesis of Poly[2,5-bis(4-fluoro-3-hexadecylthiophen-2-yl)thieno[3,2-b]-thiophene] (PFBTTT): In a dry 0.5-2 mL Biotage microwave vial, 6 (202.9 mg, 0.2509 mmol), 2,5-bis(trimethylstannyl)thieno[3,2-b]thiophene (116.9 mg, 0.2509 mmol), tris(dibenzylideneacetone)dipalladium(0) (4.1 mg, 2 mol%), and tris(o-tolyl)phosphine (6.1 mg, 8 mol%) were added, and the vial was capped and evacuated for 10 min. After backfilling with argon, degassed chlorobenzene (1.2 mL) was added, and the solution was degassed for a further 10 min. The mixture was then heated in a microwave in steps as follows: 100, 120, 140, 160 °C for 2 min each, and finally 180 °C for 30 min. After cooling to room temperature, the dark purple gel was precipitated in methanol from chlorobenzene and purified by Soxhlet extraction (glass thimble), washing with methanol, acetone, and hexane (each overnight), then chloroform (3 h), and finally extracting the polymer with chlorobenzene. Most of the solvent was removed in vacuo before precipitating the polymer into methanol and filtering (184 mg, 93%).

DFT Calculations: DFT calculations were carried out using the B3LYP hybrid functional and the 6-31G(d) basis set in the GAUSSIAN09 software package. [59] Alkyl chains were replaced with a propyl group to simplify calculations and reduce computational time. Structures were optimized, and a frequency analysis was performed. Potential energy scans were performed on the trimers using the redundant coordinate editor, scanning the indicated dihedral angle in 36 steps of 10° increments.

Characterization: 1H, 19F, and 13C NMR spectra were recorded on a Bruker AV-400 (400 MHz), using the residual solvent resonance of chloroform-d or 1,1,2,2-tetrachloroethane-d2, and are given in ppm. Microwave experiments were performed in a Biotage Initiator V 2.3. Polymer molecular weight and dispersity (Ð) analysis was completed via GPC in TCB at 140 °C using a Polymer Laboratories PL-220 HT GPC instrument calibrated against polystyrene standards. Electrospray mass spectrometry was performed with a Thermo Electron Corporation DSQII mass spectrometer. UV-vis spectra were recorded on a UV-1800 Shimadzu UV-vis spectrometer. Flash chromatography was performed on silica gel (Merck Kieselgel 60, 230-400 mesh) or on reverse-phase silica (Biotage SNAP KP-C18-HS cartridges). PESA measurements were recorded with a Riken Keiki AC-2 PESA spectrometer with a power setting of 5 nW and a power number of 0.5. Samples for PESA were prepared on glass substrates by spin-coating. DSC measurements, using ≈3 mg of material, were conducted under nitrogen at a scan rate of 10 °C min⁻¹ with a TA DSC-Q20 instrument. Flash DSC was performed on a Mettler Toledo Flash DSC 1 at a scan rate of 500 K s⁻¹. AFM images were obtained with a Picoscan PicoSPM LE scanning probe in tapping mode. GIWAXS measurements were performed at D-line, Cornell High Energy Synchrotron Source (CHESS), Cornell University. A wide-bandpass (1.47%) X-ray beam with a wavelength of 1.15 Å was shone on the samples at a grazing incidence angle of 0.15°. A Pilatus 200k area detector was placed at a distance of 195 mm from the samples. A 1.5 mm wide tantalum rod was used to block the intense scattering in the small-angle area. The exposure time was 1 s. High-resolution X-ray diffraction measurements were carried out at G2, CHESS, Cornell University.
The thin films were aligned on a Kappa diffractometer to record the θ-2θ scans. The X-ray wavelength was 1.107 Å. An attenuator was used to allow ≈1/7 of the beam flux through and avoid saturation in the case of the PBTTT sample annealed at 200 °C.

Device Fabrication and Characterization: All film preparation and characterization steps were carried out under inert atmosphere. Bottom-gate/top-contact devices were fabricated on heavily doped n+-Si (100) wafers with 400 nm thick thermally grown SiO2. The Si/SiO2 substrates were treated with trichloro(octadecyl)silane to form a self-assembled monolayer. The polymers were dissolved in hot TCB (5 mg mL⁻¹) and spin-cast at 2000 rpm from a hot solution for 60 s before being annealed at 200 °C for 30 min. Au (30 nm) source and drain electrodes were deposited onto the polymer film under vacuum through shadow masks. The channel width and length of the transistors are 1000 and 50 µm, respectively. Transistor characterization was carried out under nitrogen using a Keithley 4200 parameter analyzer. Mobility was extracted from the slope of I_D^1/2 versus V_G.

Supporting Information: Supporting Information is available from the Wiley Online Library or from the author. Additional data relating to the paper can be found at doi.org/10.6084/m9.figshare.1539547
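To make the mobility extraction above concrete, here is a minimal sketch of the standard saturation-regime analysis, μ_sat = (2L / (W·C_i)) · (d√I_D/dV_G)². The device geometry matches the paper (W = 1000 µm, L = 50 µm), but the gate capacitance for 400 nm SiO2 and the example transfer data are assumptions, not the paper's measurements.

```python
import numpy as np

# Device geometry from the paper; C_i is an assumed value computed
# for 400 nm thermally grown SiO2 (eps_r ~ 3.9).
W = 1000e-6   # channel width, m
L = 50e-6     # channel length, m
C_i = 3.9 * 8.854e-12 / 400e-9  # gate capacitance per area, F/m^2

# Hypothetical saturation-regime transfer data for a p-type device.
V_G = np.array([-60.0, -50.0, -40.0, -30.0, -20.0])   # gate voltage, V
I_D = np.array([8.1, 5.6, 3.5, 1.9, 0.8]) * 1e-6       # drain current, A

# Slope of sqrt(|I_D|) versus V_G from a linear fit.
slope = np.polyfit(V_G, np.sqrt(np.abs(I_D)), 1)[0]    # A^0.5 / V

mu_sat = (2 * L / (W * C_i)) * slope**2                # m^2 V^-1 s^-1
print(f"saturation mobility: {mu_sat * 1e4:.3f} cm^2 V^-1 s^-1")
```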
SADM: Sequence-Aware Diffusion Model for Longitudinal Medical Image Generation

Human organs constantly undergo anatomical changes due to a complex mix of short-term (e.g., heartbeat) and long-term (e.g., aging) factors. Evidently, prior knowledge of these factors will be beneficial when modeling their future state, i.e., via image generation. However, most medical image generation tasks rely only on the input from a single image, thus ignoring the sequential dependency even when longitudinal data is available. Sequence-aware deep generative models, where the model input is a sequence of ordered and timestamped images, are still underexplored in the medical imaging domain, which features several unique challenges: 1) sequences with various lengths; 2) missing data or frames; and 3) high dimensionality. To this end, we propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images. Recently, diffusion models have shown promising results in high-fidelity image generation. Our method extends this new technique by introducing a sequence-aware transformer as the conditional module in a diffusion model. The novel design enables learning longitudinal dependency even with missing data during training and allows autoregressive generation of a sequence of images during inference. Our extensive experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods. The code is available at https://github.com/ubc-tea/SADM-Longitudinal-Medical-Image-Generation.

Introduction

Iconic advancements of generative models in the medical domain have been possible due to several factors, such as state-of-the-art computational hardware and [...]. Hence, promising solutions for medical image synthesis, restoration, acceleration, and many other tasks have been proposed over the past few years [19]. Recent efforts to generate longitudinal medical images have mainly targeted the following two tasks: 1) generation of longitudinal brain images [16], which takes a source brain image and generates a new image with respect to chronological age (i.e., normal) progression or disease (i.e., abnormal) progression [11]; and 2) generation of multi-frame cardiac images [4,12], which typically takes a starting frame of a cardiac cycle (i.e., the end-diastolic or ED phase) and generates the final frame of the cycle (i.e., the end-systolic or ES phase). An illustration showing examples of these tasks is presented in Fig. 1. Generative adversarial networks (GANs) have been the de facto standard for these tasks in the past few years, but recent advances in diffusion models have shown promising results. For example, the latent diffusion model, which uses the latent embedding of an image as input to the diffusion model to improve computational efficiency, has been used to synthesize high-quality 3D brain MRI [17]. Similarly, a diffusion model was combined with a deformable image registration model, namely VoxelMorph [2], to synthesize the end-systolic frame of cardiac MRI [12,13]. However, most of these works rely on input from a single image to generate its longitudinal images. Even when longitudinal samples are available, these methods often ignore the sequential dependency in the medical domain. Sequence-aware [18] deep generative models are a class of generative models that can learn the sequential or temporal dependency of the longitudinal input data.
A sequence is defined as an ordered and timestamped set [18], and sequence-aware generative models take as input a sequence of images and output a generated image (formal definition in the Problem setting in Section 3). Although such generative models have been studied for video datasets, where they typically take a sequence of frames as input and learn their temporal dependence [1,10], the existing solutions are not feasible for longitudinal medical data generation tasks, because video datasets rarely exhibit issues that are very common in the medical domain, such as 1) longitudinal data scarcity, 2) missing frames or data, 3) high dimensionality, and 4) low temporal resolution. To this end, we explore ways to address these issues and propose a novel generative model for longitudinal medical image generation that can learn the temporal dependency given a sequence of medical images. Our proposed method is named the sequence-aware diffusion model (SADM). Specifically, during training, SADM learns to estimate attentive representations at the longitudinal positions of given tokens based on the sequential input, even with missing data. At inference time, we use an autoregressive sampling scheme to effectively generate new images. Our extensive experiments on longitudinal 3D medical images demonstrate the effectiveness of SADM compared to baselines and alternative methods. The contributions of our SADM are as follows: 1. To the best of our knowledge, we are among the first to explore the temporal dependency of sequential data and use it as a prior in diffusion models for medical image generation. 2. Our proposed SADM can work in various real-world settings, such as single-image input, longitudinal data with missing frames, and high-dimensional images, via the essential transformer module design. 3. We present state-of-the-art results in longitudinal image generation and missing data imputation for a multi-frame cardiac MRI dataset and a longitudinal brain MRI dataset.

Diffusion Models

Diffusion models consist of a forward process, which starts with the data x ∼ p(x) and gradually adds noise to obtain a noisy version of the data z = {z_t | t ∈ [0, 1]}, and a reverse process, which reverts the forward process by predicting and subtracting the noise in the reverse direction (i.e., from t = 1 to t = 0). Formally, following [9], we define the forward process q(z|x) in continuous time 0 ≤ s < t ≤ 1 as

q(z_t | x) = N(α_t x, σ_t² I),   q(z_t | z_s) = N((α_t/α_s) z_s, σ²_{t|s} I),   (1)

where α_t² = 1/(1 + e^(−λ_t)) and σ_t² = 1 − α_t² are the continuous-time noise schedules, σ²_{t|s} = (1 − e^(λ_t−λ_s)) σ_t² is the variance of the s-to-t transition, and λ_t = log[α_t²/σ_t²] is the signal-to-noise ratio of the noise schedules, which is monotonically decreasing [14]. This forward process can be reformulated in the reverse direction as q(z_s | z_t, x) = N(μ̃_{s|t}(z_t, x), σ̃²_{s|t} I), where μ̃_{s|t}(z_t, x) = e^(λ_t−λ_s) (α_s/α_t) z_t + (1 − e^(λ_t−λ_s)) α_s x and σ̃²_{s|t} = (1 − e^(λ_t−λ_s)) σ_s². The reverse process is parameterized by a generative model x̂_θ in the form

p_θ(z_s | z_t) = N(μ̃_{s|t}(z_t, x̂_θ(z_t, λ_t)), Σ̃_{s|t}),   (2)

where the variance Σ̃_{s|t} = (σ̃²_{s|t})^(1−v) (σ²_{t|s})^v is an interpolation between σ̃²_{s|t} and σ²_{t|s} [14], and v is a hyperparameter that controls the stochasticity of the sampler [15]. We use the ancestral sampler [8] with a discretization λ_1 < ... < λ_T over T discrete time steps:

z_s = μ̃_{s|t}(z_t, x̂_θ(z_t, λ_t)) + √(Σ̃_{s|t}) ε,   ε ∼ N(0, I).   (3)

Classifier-free Guidance

There are two branches of conditioning methods for diffusion models: 1) classifier-guided [5]; and 2) classifier-free guidance [9].
However, it is often hard to define the problem setting for a classifier in the medical domain, and even state-of-the-art classifiers often do not have performance suitable for classifier-guided models. Thus, we opt to use classifier-free guidance for conditioning the diffusion model. The classifier-free guided diffusion model takes the conditioning signal c as an additional input and is defined as

x̃_θ(z_t, c, λ_t) = (1 + w) x̂_θ(z_t, c, λ_t) − w x̂_θ(z_t, ∅, λ_t),   (4)

which is the weighted sum of the model with condition c and the model with a zero tensor ∅, i.e., the unconditional model. The guidance strength w controls the trade-off between sample quality and diversity, i.e., the higher the guidance strength, the lower the diversity. Eq. (4) can also be performed in ε-space, ε̃_θ(z_t, c, λ_t) = (1 + w) ε_θ(z_t, c, λ_t) − w ε_θ(z_t, ∅, λ_t). During training, we can randomly replace the conditioning signal c by a zero tensor with probability p_uncond. Then, the noise-prediction loss term [8] for the reverse-process conditional generative model is

E_{ε,t} [ ‖ε_θ(z_t, Î c, λ_t) − ε‖² ],   (5)

where Î ∼ Be(p_uncond) is either a zero or an identity tensor sampled from a Bernoulli distribution (a minimal code sketch of Eqs. (4) and (5) is given at the end of this section).

SADM: Sequence-Aware Diffusion Model

We propose a sequence-aware diffusion model (SADM) for longitudinal medical image generation. Specifically, our proposed SADM uses a transformer-based attention module for conditioning a diffusion model. The attention module is a 4D generalization of the video vision transformer (ViViT) [1], and it is specifically used to generate the conditioning signals for the diffusion model. In this section, we briefly explain the problem setting and the details of SADM. An overview of SADM is illustrated in Fig. 2. [Fig. 3 caption: An illustration of the attention module A_θ. The temporal encoder performs temporal self-attention followed by dimension reduction using an MLP, while the spatial decoder performs spatial self-attention followed by an upsampling operation.]

Problem setting

Let X ∼ p(X) ∈ R^(L×W×H×D) be a longitudinal 3D medical image with temporal length L. We partition X into conditioning images X_C ∈ R^(n_C×W×H×D), missing images X_M ∈ R^(n_M×W×H×D), and future images X_F ∈ R^(n_F×W×H×D), where C = {c_1, ..., c_{n_C}}, M = {m_1, ..., m_{n_M}}, and F = {f_1, ..., f_{n_F}} are sequences of scalar indices for tensor indexing. We define these sequences as ordered, timestamped, and non-intersecting sets [18]. We assume that the indices of F are always in the future of C and M, i.e., c < f and m < f for all c ∈ C, m ∈ M, and f ∈ F. Also, we assume that the first image of the sequence is known, i.e., c_1 = 1. The objective is to maximize the posteriors p(X_M | X_C) and p(X_F | X_C), i.e., to synthesize the missing and future images given a sequence of conditioning images.

Attention Module A_θ

Unlike many other longitudinal vision datasets (e.g., video data), longitudinal medical images have the following unique properties: various sequence lengths, missing data or frames, and high dimensionality. Existing generative solutions used in common computer vision fields are not optimized for these properties, so longitudinal medical image generation requires a specialized architecture. Inspired by the success of transformers for vision datasets and their ability to compute attention over long-distance spatio-temporal representations [1], we propose a transformer-based attention module for generating the conditioning signals of the diffusion model. This attention module directs which frames of the conditioning images X_C are beneficial for generating the future or missing frames. An overview of our attention module is illustrated in Fig. 3.
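As promised above, here is a minimal numpy sketch of classifier-free guidance, Eqs. (4)-(5): condition dropout during training and the weighted conditional/unconditional combination at sampling. The toy denoiser, shapes, and default p_uncond and w values are assumptions for illustration, not the SADM architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_model(z_t, c, lam):
    """Stand-in noise predictor; a real model would be a neural net
    taking the noisy volume z_t, conditioning signal c, and log-SNR lam."""
    return 0.1 * z_t + 0.05 * c

def train_condition(c, p_uncond=0.1):
    """Randomly replace the conditioning signal with a zero tensor,
    i.e., the Bernoulli mask I_hat of Eq. (5)."""
    return np.zeros_like(c) if rng.random() < p_uncond else c

def guided_eps(z_t, c, lam, w=0.1):
    """Classifier-free guided prediction in eps-space, Eq. (4):
    (1 + w) * conditional - w * unconditional."""
    uncond = eps_model(z_t, np.zeros_like(c), lam)
    return (1 + w) * eps_model(z_t, c, lam) - w * uncond

z_t = rng.normal(size=(128, 128, 32))  # toy 3D volume
c = rng.normal(size=(128, 128, 32))    # toy conditioning signal

c_seen = train_condition(c)            # condition as seen during training
print(guided_eps(z_t, c, lam=0.0).shape, bool(c_seen.any()))
```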
Token embedding: A common approach to embedding an image or video into tokens is to run a fusing window over the input with non-overlapping strides [1]. Given X ∈ R^(L×W×H×D), we run a non-overlapping linear projection window of dimension R^(l×w×h×d×dim) over the image, i.e., a 4D convolution. The resulting unflattened tokens h have the shape R^(L/l × W/w × H/h × D/d × dim). With missing frames, fusing through the temporal axis is not feasible, so we set l = 1 for the experiments conducted in this paper. Also, the temporal resolution of longitudinal images is typically very low compared to their spatial resolution, so setting l = 1 only slightly affects computational efficiency.

Temporal encoder: Inspired by factorized transformers [1], we factorize our attention module into a temporal encoder and a spatial decoder that benefit from long-range spatio-temporal attention with high computational efficiency. The temporal encoder computes self-attention temporally among all tokens at the same spatial index. Specifically, it takes the unflattened tokens and reshapes them into h_tmp ∈ R^((W/w · H/h · D/d) × L × dim), where the leading dimension is the batch dimension. It then computes self-attention along the temporal dimension, and an MLP reduces the token dimension by a factor of 2³.

Spatial decoder: The output of the temporal encoder is reshaped into spatial tokens, and the spatial decoder computes self-attention over the spatial dimensions between all tokens at the same temporal index; each spatial dimension is then upsampled by a factor of 2. Finally, we unflatten and reshape the output of the last block into R^(W/w × H/h × D/d × L·dim), and perform upsampling and a 3D convolution to obtain the conditioning signal c ∈ R^(W×H×D). Since transformers can mask specific indices of a token, we can train and infer even when there are missing frames in the longitudinal images. However, we have found that using zero tensors for missing frames with non-zero positional encoding performs better than masking missing frames.

Conditional Diffusion Model

Our proposed SADM follows the formulation of the classifier-free diffusion model [9] defined in Section 2; the core sampling step of Algorithm 2 is z_s ← μ̃_{s|t}(z_t, x̃_θ(z_t, c_{i−1}, λ_t)) + √(Σ̃_{s|t}) ε, combining Eqs. (3) and (4). We extend this diffusion model by using the sequence-aware conditioning signal explained in the previous section. Furthermore, we use an autoregressive sampling scheme that can effectively capture long-distance temporal dependency during inference.

Training SADM: During training, the input to the diffusion model is a randomly selected target image from the unobserved indices M ∪ F and a conditioning signal from the previous indices, i.e., x_i ∈ R^(W×H×D) and c = A_θ({x_1, ..., x_{i−1}}), respectively. The attention module and the conditional diffusion model can be pretrained separately and finetuned together, or trained end-to-end from scratch with the loss term defined in Eq. (5). The attention module can be pretrained by minimizing the ℓ2 loss between the target image and the conditioning signal c, and the diffusion model can be pretrained with a zero- or random-valued tensor as the conditioning signal. However, we have found that training end-to-end from scratch performs better. Our training pipeline is defined in Algorithm 1.

Autoregressive sampling: SADM samples the next-frame image x_i given the conditioning signal of its previous images, i.e., c_{i−1} = A_θ({x_1, ..., x_{i−1}}). However, real-world data often have missing data or only a single image per subject.
Experiments

In this section, we show the effectiveness of our proposed SADM in medical image generation on one public 3D longitudinal cardiac MRI dataset and one simulated 3D longitudinal brain MRI dataset. We compare our work with GAN-based [4] and diffusion-based [12] baselines quantitatively and qualitatively. Finally, an ablation study of our model components with various settings for the input sequence is presented.

Dataset and Implementation

Cardiac dataset. We use the multi-frame cardiac MRI curated by the ACDC (Automated Cardiac Diagnosis Challenge) organizers [3]. A common task for cardiac image generation is to synthesize the final frame of a cardiac cycle (i.e., end-systolic or ES), given a starting frame of the cycle (i.e., end-diastolic or ED). The ACDC dataset consists of cardiac MRI from 100 training subjects and 50 testing subjects. We take the intermediate frames from ED to ES and resize them to X \in \mathbb{R}^{12 \times 128 \times 128 \times 32}, where each dimension is the length of the frame, the width, the height, and the depth, respectively. Then we Min-Max normalize the dataset subject-wise. Although MRI resizing results in uneven resolution, we opt for this approach, as it is the most reproducible preprocessing method. For training, we randomly select conditioning, missing, and future indices. During inference, we experiment with three settings: 1) Single image, where only the ED frame is given as input; 2) Missing data, where the input sequence has randomly missing frames; and 3) Full sequence, where the input sequence is fully loaded with conditioning images.

Brain dataset. Simulating healthy subjects' brain changes over time is essential for understanding human aging [6]. The in-house synthesis of longitudinal brain MRI was carried out in two main steps, using 2,851 subject scans evenly distributed in age. We first divided these subject scans into five age groups (18-30, 31-45, 46-60, 61-74, and 75-97 years old) and generated five age-specific templates following [20]. Then we used a GAN-based registration model to register each subject scan to these five templates, respectively, to simulate the longitudinal images of the same person at different ages [21]. Templates and registered images were divided into ten cross-validation folds.

Implementation. We follow the classifier-free diffusion model [9] architecture and hyperparameters, and modify the model into a 3D model. For the transformer, we use the spatial and temporal transformer blocks introduced in ViViT [1] (specifically, Model 3 of ViViT). We trained the model for 3 million iterations, which took about 150 GPU hours using Nvidia V100 32GB GPUs. For inference, we use diffusion time steps of T = 1,000 and a classifier-free guidance strength of w = 0.1.
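The cardiac preprocessing described above (resizing to a fixed 12 x 128 x 128 x 32 grid, then subject-wise Min-Max normalization) can be sketched as follows; the use of SciPy's `zoom` with linear interpolation is an assumption, not necessarily the authors' exact tooling:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_subject(frames, target=(12, 128, 128, 32)):
    """Resize a subject's 4D sequence (L, W, H, D) to the target shape and
    Min-Max normalize it subject-wise to [0, 1]."""
    x = np.asarray(frames, dtype=np.float32)
    factors = [t / s for t, s in zip(target, x.shape)]
    x = zoom(x, factors, order=1)          # linear resampling per axis
    return (x - x.min()) / (x.max() - x.min() + 1e-8)
```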
Comparison with Baseline Methods

We choose two state-of-the-art baseline models for comparison: 1) a GAN-based model [4], which uses a UNet-based GAN to synthesize an ES frame given an ED frame; and 2) a diffusion-based model [12], which uses a diffusion model with a deep registration model to register an ED frame into an ES frame. We follow the same training and inference pipeline for cardiac and brain image generation, so we explain the settings only for the cardiac dataset as follows. For training, SADM uses the intermediate frames between ED and ES, so we augment the baselines with pairs of intermediate frames and their ES frame for a fair comparison. Since these models can only perform single-image synthesis (i.e., ED-to-ES translation), we follow their inference pipeline using only the ED frame as input. We used the source code provided in each respective paper, only modifying the data loader for preprocessing and augmentation.

A qualitative comparison is presented in Fig. 4, and a quantitative comparison is presented in Table 1. For cardiac image generation, the blood pool region synthesized by SADM is visibly closer to the ground truth than that of the baseline methods. Also, other areas surrounding the blood pool, such as the myocardium and ventricles, are synthesized with higher fidelity. For brain image generation, the ventricular regions (blue box) synthesized by SADM are crisper compared to the baselines, and the cortical surface is synthesized more accurately. We then perform a quantitative comparison by calculating the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized root-mean-square deviation (NRMSE) between the target and the synthesized ES frame. Our model outperforms the GAN-based method [4] by 3-13% in each metric while slightly outperforming the diffusion-based model [12]. It is worth noting that the diffusion-based baseline uses a source image and a reference image for registration, whereas we only use the source image as input. Although our proposed SADM is capable of working with a single image, it is designed to perform even better with a sequence of images, as shown in the next section.

Ablation Study

In this section, we perform an ablation study on the components of our model with various settings for the input sequences. First, we experiment with the different settings for the input sequence defined in the first paragraph of Section 4.1, i.e., the single image, missing data, and full sequence settings. As presented in Fig. 5, synthesis using the full sequence and missing data settings shows a higher SSIM compared to the single image setting. Also, as observed by the high peak in SSIM for frames in the vicinity of conditioning frames, our SADM learns which frames of the input sequence are important in generating future frames, i.e., the sequential dependency. Next, we perform an ablation study by removing either the attention module or the diffusion model. The diffusion-only model can be trained with the raw pixel values of the sequential image as conditioning signals, and the attention-only model can be trained by minimizing the \ell_2 loss between the target image and the output of the transformer. As shown in Table 2, the attention-only module evidently has the worst performance, as transformers are not designed for image generation (typically due to flattening operations [22]). The diffusion-only model performs on par with the GAN-based baseline [4], but it is unable to learn the sequential dependency, as observed by the minimal performance increase in the full sequence setting compared to the single image setting.
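The three metrics in the comparison can be computed with scikit-image plus a few lines of NumPy. A sketch assuming volumes normalized to [0, 1]; NRMSE is shown with range normalization, which is one of several common conventions:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(target, synth):
    """SSIM, PSNR, and NRMSE between a target and a synthesized 3D frame."""
    ssim = structural_similarity(target, synth, data_range=1.0)
    psnr = peak_signal_noise_ratio(target, synth, data_range=1.0)
    rmse = np.sqrt(np.mean((target - synth) ** 2))
    nrmse = rmse / (target.max() - target.min())   # range-normalized RMSE
    return {"SSIM": ssim, "PSNR": psnr, "NRMSE": nrmse}
```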
Conclusion

In this work, we propose a sequence-aware diffusion model for the generation of longitudinal medical images. Specifically, our model consists of a transformer-based attention module that can learn the sequential or temporal dependence of longitudinal data and a diffusion model that can synthesize high-fidelity medical images. We tested our proposed SADM on longitudinal cardiac and brain MRI generation and presented state-of-the-art performance quantitatively and qualitatively. Our approach of learning the temporal dependence of sequential data and using it as a prior in diffusion models is an exciting new research topic in the field of medical image generation. However, the limitations of our model's computational efficiency for large medical datasets suggest that further work is needed to improve sampling efficiency. We hope our research inspires researchers to pursue this new topic and find solutions to these challenges.
Investigation on Vibration Characteristics of Thin-Walled Steel Structures under Shock Waves Thin-walled steel structures, prized for their lightweight properties, material efficiency, and excellent mechanical characteristics, find wide-ranging applications in ships, aircraft, and vehicles. Given their typical role in various types of equipment, it is crucial to investigate the response of thin-walled structures to shock waves for the design and development of innovative equipment. In this study, a shock tube was employed to generate shock waves, and a rectangular steel plate with dimensions of 2400.0 mm × 1200.0 mm × 4.0 mm (length × width × thickness) was designed for conducting research on transient shock vibration. The steel plate was mounted on an adjustable bracket capable of moving vertically. Accelerometers were installed on the transverse and longitudinal symmetric axes of the steel plate. Transient shock loading was achieved at nine discrete positions on a steel plate by adjusting the horizontal position of the shock tube and the vertical position of the adjustable bracket. For each test, vibration data of eight different test positions were obtained. The wavelet transform (WT) and the improved ensemble empirical mode decomposition (EEMD) methods were introduced to perform a time-frequency analysis on the vibration of the steel plate. The results indicated that the EEMD method effectively alleviated the modal aliasing in the vibration response decomposition of thin-walled structures, as well as the incompletely continuous frequency domain issue in WT. Moreover, the duration of vibration at different frequencies and the variation of amplitude size with time under various shock conditions were determined for thin-walled structures. These findings offer valuable insights for the design and development of vehicles with enhanced resistance to shock wave loading. Introduction Thin-walled steel structures are commonly utilized in various applications such as ships, aircraft, and vehicles due to their lightweight nature, material efficiency, and favorable mechanical properties [1,2]. These structures are often subjected to transient shock loads during operation in challenging environments. As a result, the dynamic response characteristics of thin-walled structures under transient shock loads have received extensive attention from researchers and engineers in related fields [3,4]. The investigation of the dynamic response of thin-walled structures has predominantly focused on two distinct modes of failure. Specifically, the response of these structures to shock loads is generally characterized by either large plastic deformation [5][6][7][8] or tensile tearing [9,10]. For the study of large plastic deformation of thin-walled structures, Wang et al. [11] conducted experimental and numerical simulation studies on the response of free thin steel plates under intense loads and proposed a theoretical model to describe the deformation of free metal thin plates under powerful loading. They found that when the loading parameters are determined, the deformation velocity and deflection of the plate are only related to the width-to-thickness ratio of the plate. Xu et al. 
[12] present experimental and numerical investigations on the response of thin aluminum plates to shock loading, reveal the relationship between the deformation region of counterintuitive behavior (CIB) and the loading and geometric parameters of the structure, and derive a relationship between normalized duration and charge mass to predict the occurrence of CIB. Kaufmann et al. [13] introduced the Virtual Fields Method (VFM) to reconstruct surface pressures on thin steel plates by measuring the full-field deformation of plate dynamics. A shock wave similar to an explosion is generated by a shock tube, demonstrating the crucial role of VFM modeling in predicting large plastic deformations of structures. Curry et al. [14] investigated the impact of different charge backing types on the plastic deformation of thin-walled steel plates using a combination of experimental and computational methods. The findings suggest that metal-backed charges increase impulse transfer by 3-5 times compared to air-backed charges. While the permanent deflection was greater with metal backing, the degree of increase was less pronounced compared to that of the impulse transfer. Kim et al. [15] studied the influence of the relative position of explosives and thin steel plates on plate damage and found that tearing damage can be reduced by optimizing the inclination angle of the thin wall. Yao et al. [16] conducted a large-scale experimental study on the damage characteristics and dynamic response of thin-walled multi-steel box structures under restrained high-intensity loads and found that strengthening the corners of the box can prevent tearing of the thin-walled structure. McDonald et al. [17] examined the response of four high-strength thin-walled structures to localized blast loading. The findings indicate that higher-strength steels and tailored microstructures provide enhanced rupture resistance. A new non-dimensional impulse correction parameter is introduced for assessing the impact of charge stand-off on deformation and rupture performance. As the level of intelligence and informationization of vehicles increases, electronic and mechanical equipment becomes increasingly important in these platforms [18]. In the face of enemy attacks, even if the structure remains intact, strong vibrations can damage electronic and mechanical devices, thus affecting weapon and equipment effectiveness [19]. Research on the vibration characteristics of thin-walled structures mainly focuses on numerical simulations [20][21][22] and theoretical analysis [23][24][25]. In terms of numerical simulations, Park et al. [26] conduct a numerical investigation to evaluate the effectiveness of thin-walled panels in attenuating vibrational damage induced by explosive events. Their findings reveal that the implementation of blast-resistant panels significantly mitigates the propagation of acceleration, with the most favorable outcomes achieved through the utilization of thicker panels and lower explosion loads. In a separate study, Wu et al. [27] examine the vibrational response of subterranean thin-walled structures subjected to surface blast loads by employing numerical simulations. The researchers observe an increase in peak velocity as the distance to the explosion source decreases, with a predominance of vertical vibrations. Moreover, the study introduces a predictive model for damage assessment and delineates critical thresholds associated with distinct damage levels. Wu et al.
[28] use numerical simulations to propose a data-driven approach for designing distributed Dynamic Vibration Absorbers (DVAs) to mitigate vibrations in thin-walled structures with tight modal spacing. Leveraging Singular Value Decomposition (SVD) on structural response data, without needing excitation or structural mode information, optimal DVA placement and parameters are determined. This method surpasses traditional techniques, showcasing its robustness and effectiveness on a simply supported square plate and a fairing, achieving broad-band vibration suppression using only structural response data. For theoretical studies, Pandey et al. [29] investigate the transient vibroacoustic response of functionally graded sandwich plates with varying thickness ratios and material gradation. A parametric study is conducted to investigate the influence of the volume fraction index and thickness ratio on the transient vibroacoustic response. Vieira et al. [30] introduced a new high-order beam model for thin-walled structure response analysis, which considers the three-dimensional displacement characteristics of the thin-walled structure and the in-plane bending characteristics of the section. This model effectively analyzes local and global buckling phenomena of thin-walled structures under high-order modes. Xu et al. [31] found that the improved beam theory based on the Carrera Unified Formulation has higher reliability and accuracy in predicting the modal behavior of the structure than the classical beam theory. The complex interactions between shock waves and structures lead to substantial analytical challenges, making it difficult for theoretical analysis to effectively address such complexities [32,33]. With these factors in mind, the importance of experimental research must be highlighted, as it offers valuable insights and enables accurate validation and refinement of theoretical results [34,35]. In this work, an experimental device comprising a shock tube system and a steel plate fixed onto an adjustable support bracket is proposed. The device enables the application of transient shock loads at various positions along a thin-walled structure, facilitating the measurement of its vibration response under shock loading conditions. By conducting transient loading at nine different positions on the structure and using accelerometers at key positions, the transient shock vibration response characteristics were obtained. To address the frequency-domain discontinuity problem inherent to traditional wavelet transformation methods, this study proposes a novel empirical mode decomposition method grounded in piecewise cubic Hermite interpolation (PCHIP), an approach that ensures monotonicity and circumvents the "over envelope" and "under envelope" fitting issues commonly associated with conventional interpolation methods, thereby enhancing the analysis of the vibration response of thin-walled structures under varying shock positions.

Experimental Setups

In this study, transient loads were generated using a shock tube system, as illustrated in Figure 1a. The shock tube, with an inner diameter of 90.0 mm, was partitioned into high-pressure and low-pressure sections, separated by a 0.5 mm-thick aluminum diaphragm with a 0.3 mm cross-shaped scratch. High-pressure nitrogen served as the driving gas in the high-pressure section, while the low-pressure section was initially connected to the atmosphere.
When the pressure of the high-pressure section reached about 1300.0 kPa while the low-pressure section remained at atmospheric pressure, the diaphragm ruptured rapidly along the scratch, creating a shock wave in the low-pressure section. A pressure sensor (Kistler 211B4) was installed at the shock tube opening to measure the pressure generated by the shock wave from the tube. In order to investigate the shock response of thin-walled structures, an adjustable bracket was designed and installed. The bracket featured an adjustable track that allowed for easy modification of the shock wave position by adjusting the plate's vertical and horizontal positions. A steel plate (measuring 2400.0 mm × 1200.0 mm × 4.0 mm in length, width, and thickness) was mounted onto the bracket. The steel plate is made from Q235 steel, with a density of 7.85 g/cm3, a minimum yield strength of 235 MPa, and a Young's modulus of 210 GPa. The plate's response to shock wave loading was analyzed using eight Kistler 8776B100A accelerometers, which were positioned symmetrically along the horizontal and vertical axes of the steel plate. Acceleration data were obtained by processing the voltage signals captured by these sensors with a Kistler TraNET 408DP data acquisition device. Figure 1b shows the arrangement of the accelerometers, with six (RM1-RM6) placed on the plate's horizontal axis and two (CM1, CM2) on the plate's vertical axis. The first transverse sensor was placed 1200.0 mm to the left of the steel plate's center point, with subsequent sensors positioned every 200.0 mm to the right. Sensors CM1 and CM2, located on the vertical axis, were situated 200.0 mm and 500.0 mm above the center point, respectively. This setup was used to determine the structure's vibration characteristics. All nine shock positions were located on the right side of the thin plate. Shock positions M1, M2, and M3 were positioned on the horizontal axis of symmetry of the thin plate, lying 200.0 mm, 500.0 mm, and 800.0 mm to the right of the plate's center, respectively. Shock positions H1, H2, and H3 were located 200.0 mm above these three loading points, and loading positions L1, L2, and L3 were situated 100.0 mm below them.

Repetition Verification

To ensure the reproducibility of shock wave loading, a repeatability verification experiment was conducted using the shock tube apparatus. Figure 2 depicts the pressure-time history curves obtained by a pressure sensor at the shock tube nozzle in two separate experiments. The initial peak pressures of the shock waves in the two tests were 129 kPa and 128 kPa, respectively, exhibiting a deviation of 0.7%. These results indicate that the shock wave transient load system used in the experiments demonstrates high consistency.
Wavelet Transform Analysis Method

The Wavelet Transform (WT) can show the detailed information of signals in the time-frequency domain. The wavelet transform W_\psi f(a, b) of a signal f(t) is defined as:

W_\psi f(a, b) = |a|^{-1/2} \int_{-\infty}^{+\infty} f(t)\, \psi^*\!\left(\frac{t-b}{a}\right) dt,

where a, b \in \mathbb{R}, a \neq 0 is called the scaling factor, and b is called the translation factor. \psi_{a,b}(t) = |a|^{-1/2} \psi((t-b)/a) represents a series of basis wavelets determined by a and b, and \psi^*((t-b)/a) represents the complex conjugate of the basis wavelet. The role of the WT is to transform one-dimensional impulse response signal data into a two-dimensional matrix, where each row represents the wavelet coefficients at a different decomposition scale, and each column represents a different time of the impulse response signal data. Since the WT is discontinuous in the frequency domain, it is not feasible to achieve a fully continuous wavelet transform of the signal [36]. To accurately capture the frequency-domain characteristics of the structure, we have developed a novel method, namely, the improved ensemble empirical mode decomposition method.

EEMD-HHT Analysis Method

The Hilbert-Huang transform (HHT) is a time-frequency localization analysis method with strong adaptability, which is suitable for processing and analyzing non-stationary signals. The HHT consists of EMD decomposition and the Hilbert transform, but EMD decomposition has the problem of mode aliasing, that is, the components of different frequency bands in the signal cannot be effectively separated. Wu et al. [37] proposed EEMD decomposition, which can prevent the diffusion of low-frequency modal components by adding white noise and thereby alleviate modal aliasing.
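As a concrete reference for the two ingredients just introduced, the continuous wavelet scalogram and the Hilbert transform of a signal component, the following is a minimal sketch using PyWavelets and SciPy; the sampling rate, wavelet choice, and toy signal are illustrative assumptions:

```python
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 50_000                                    # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 200 * t) * np.exp(-3 * t)  # toy decaying vibration

# Continuous wavelet transform: rows = scales, columns = time samples
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Hilbert transform of a component c(t): z(t) = c(t) + j*H[c(t)], a(t) = |z(t)|
c = signal                                     # stand-in for one IMF component
z = hilbert(c)                                 # analytic signal
a = np.abs(z)                                  # instantaneous amplitude a(t)
```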
After the signal is decomposed by EEMD, a group of IMF components can be obtained. The Hilbert transform is performed on each order of IMF component to construct the analytic signal z(t):

z(t) = c(t) + jH[c(t)],

where H[c(t)] is the Hilbert transform of the IMF component c(t), and j is the imaginary unit. The instantaneous amplitude a(t) corresponding to the analytic signal can then be obtained:

a(t) = \sqrt{c(t)^2 + H[c(t)]^2}.    (3)

Improved Decomposition Method

In the ensemble empirical mode decomposition (EEMD) method, accurate envelope fitting is essential for signal decomposition. However, traditional cubic spline interpolation methods for fitting extreme points often lead to "over envelope" and "under envelope" issues, which can significantly impact the accuracy of decomposition. In this work, an EEMD method based on piecewise cubic Hermite interpolation (pchip interpolation) was employed, which effectively resolves the fitting issues encountered in traditional methods. To decompose the signal, we used a low-pass filter to isolate the target frequency band and applied ensemble empirical mode decomposition based on piecewise cubic Hermite interpolation. Figure 3 shows the pchip-EEMD transformation process of real signals.
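A condensed sketch of the pchip-based sifting idea follows (illustrative, not the authors' implementation): extrema are located, the upper and lower envelopes are fitted with SciPy's PchipInterpolator instead of cubic splines, and the envelope mean is subtracted; the ensemble step averages the first IMF over noise-perturbed copies of the signal (a full EEMD would then iterate on the residue):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting iteration with PCHIP envelopes (avoids spline overshoot)."""
    hi = argrelextrema(x, np.greater)[0]       # indices of local maxima
    lo = argrelextrema(x, np.less)[0]          # indices of local minima
    if len(hi) < 2 or len(lo) < 2:
        return x                               # too few extrema for envelopes
    upper = PchipInterpolator(t[hi], x[hi])(t)
    lower = PchipInterpolator(t[lo], x[lo])(t)
    return x - (upper + lower) / 2             # remove the local mean

def eemd_first_imf(x, t, n_trials=50, noise_std=0.2, n_sift=10):
    """Ensemble average of the first IMF over white-noise-perturbed copies."""
    imfs = []
    for _ in range(n_trials):
        y = x + noise_std * np.std(x) * np.random.randn(len(x))
        for _ in range(n_sift):
            y = sift_once(y, t)
        imfs.append(y)
    return np.mean(imfs, axis=0)
```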
We then compared the decomposition results obtained using this approach with those obtained from EMD decomposition under identical parameters. As illustrated in Figure 4, the IMF components derived from the improved decomposition show a relatively concentrated center frequency in each order, effectively reducing the mode aliasing phenomenon.

Figure 4. (a) EMD decomposition effect; (b) improved EEMD decomposition effect.

To validate the effectiveness of the pchip-EEMD decomposition, Figure 5 provides a comparison between the improved EEMD decomposition results and the power spectral density estimates of the different IMF components obtained via the original EMD decomposition. As illustrated in the figure, the frequency bands associated with each IMF component in the improved decomposition method are less prone to aliasing. Conversely, the EMD decomposition results reveal that most of the frequency bands in the signal are aliased in the IMF1 component. This comparison demonstrates that the pchip-EEMD method effectively mitigates the mode aliasing phenomenon observed in the original decomposition method.

Figure 5. (a) EMD decomposition; (b) improved EEMD decomposition.

Shock Position of H1

Figure 6 illustrates the wavelet coefficient diagrams of vibration signals captured from different measuring points under the H1 shock condition, which is close to the vertical axis of symmetry of the steel plate. The diagrams of Figure 6a-f correspond to the six transverse sensors RM1-RM6, while Figure 6g,h correspond to the two longitudinal sensors CM1 and CM2. At the beginning of the loading process, the structure exhibited high-frequency vibration with a relatively high amplitude, which gradually transformed over time. In this context, f_L represents the frequency with the longest vibration duration, and T_L denotes the duration of that frequency. Measuring point RM1, which is close to the left boundary, showed no lower-frequency vibration below 1000 Hz, and its high-frequency vibration duration was significantly shorter than that of the other measuring points. The duration of higher-frequency vibration was generally shorter than that of lower-frequency vibration, with an inverse correlation between frequency and duration. At RM1, f_L = 2670 Hz, while for RM2-RM6, f_L was concentrated between 100-250 Hz with T_L = 500 ms. For CM1, f_L = 546 Hz, and the vibration duration below 50 Hz decreased with decreasing frequency. At CM2, f_L = 117 Hz, with T_L = 470 ms.
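The quantities f_L and T_L used throughout the following results can be extracted mechanically from such wavelet coefficient maps. A sketch under the (assumed) convention that a frequency counts as vibrating while its coefficient magnitude exceeds a fixed fraction of the global peak:

```python
import numpy as np

def longest_duration_frequency(coeffs, freqs, dt, rel_threshold=0.1):
    """Given CWT coefficients (n_freqs x n_samples), return (f_L, T_L):
    the frequency whose vibration persists longest, and that duration."""
    mag = np.abs(coeffs)
    active = mag > rel_threshold * mag.max()   # time-frequency activity mask
    durations = active.sum(axis=1) * dt        # active time per frequency row
    k = int(np.argmax(durations))
    return freqs[k], durations[k]
```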
Shock Position of H2

Under the H2 shock condition, the vibration at measurement points RM2, RM4, RM5, and RM6 within the plate exhibits the same f_L as under the H1 shock condition, while at measurement points RM3 and CM2, T_L occurs at f_L = 99 Hz, which is lower than that of the H1 shock condition (as depicted in Figure 7c,i).

Shock Position of H3

For the H3 shock condition, located to the right of the H1 and H2 conditions, the wavelet coefficient diagram in Figure 8 reveals that the vibration signals at each measuring point have shorter durations across all frequency ranges compared to the H1 and H2 conditions. When the loading is near the edge, the overall amplitude of the plate vibration is smaller, leading to shorter durations of vibration. As shown in Figure 8c, the low-frequency vibration duration and amplitude at measuring point RM3, which is equidistant from the vertical symmetry axis as the loading position, are significantly enhanced compared to the H1 and H2 conditions. At measuring point RM1, f_L = 1228 Hz, with T_L = 64 ms, as shown in Figure 8a. The vibration duration at RM1 is consistently shorter than those of the other measuring points across all shock conditions (H1, H2, and H3).
Shock Position of M1

Under the M1 shock condition, the vibration characteristics of the structure change when the shock is located on the horizontal axis of symmetry of the steel plate. Figure 9 presents the wavelet coefficient diagram of the vibration signals at each measuring point. Because the loading position is close to the center of the structure, the amplitude and duration of the vibration signals at each point are higher, as shown in Figure 9b,c,e,f. The duration of f_L at measuring points RM2, RM3, RM5, and RM6 is significantly longer than that of the H1-H3 conditions, where the loading position is above the axis of symmetry. Specifically, at measuring point RM2, the T_L = 790 ms of the f_L = 117 Hz vibration signal is 32% higher than the average duration under the H1-H3 conditions at the same frequency. Measuring point RM3 exhibits an f_L = 202 Hz vibration signal with T_L = 550 ms, while measuring point RM4 shows a relatively small f_L = 117 Hz vibration frequency with T_L = 305 ms compared to other conditions. At measuring point RM5, the T_L = 800 ms of the f_L = 187 Hz vibration signal is significantly longer than that of other shock conditions. Moreover, as shown in Figure 9f, the 100-300 Hz vibration at measuring point RM6 has a higher initial amplitude due to its proximity to the loading position, and the f_L = 109 Hz vibration signal at this point has T_L = 830 ms. As shown in Figure 9h, the vibration signal with the longest duration (T_L = 310 ms) at measuring point CM1, which is near the upper boundary of the plate, has a frequency of f_L = 378 Hz, significantly higher than that of the other measuring points under this shock condition.
Shock Position of M2

In the M2 shock condition, for RM1, f_L = 12,053 Hz, similar to the M1 condition. The vibration at f_L = 138 Hz at measuring points RM2, RM4, and RM6 has the longest duration and a higher amplitude compared to other frequencies under this condition. For RM5, f_L = 85 Hz, with T_L = 660 ms, as shown in Figure 10e. At CM1, which is closer to the upper side boundary, f_L = 566 Hz, as shown in Figure 10g, and this frequency is significantly higher than that of the other measuring points except RM1. At CM2, T_L = 470 ms and f_L = 69 Hz, which is lower than the frequency with the longest duration under the other conditions.

Shock Position of M3

The vibration duration of frequencies above 250 Hz is prolonged at each measuring point under the M3 shock condition, when the shock position is near the right boundary and located on the horizontal symmetry axis of the steel plate. The curves in Figure 11 represent the vibration duration of different frequencies at each measuring point under the M3 shock condition. Compared with the previous 100-250 Hz frequency band, f_L at measuring points RM2, RM4, RM5, and RM6 has increased to 500 Hz or above under the M3 shock condition.
Under the three shock conditions with the shock position located on the horizontal symmetry axis, f_L at measuring point RM1 is consistently above 12,000 Hz. For the M3 shock condition, the duration of vibrations above 250 Hz in the structure increases significantly, and the amplitude is also enhanced. Taking the third-layer IMF component with a center frequency of around 1000 Hz at the RM6 measurement point as an example, the average instantaneous amplitude of vibrations in the M3 condition is 0.93 g, which is significantly higher than that in the M2 condition at 0.81 g.

Shock Position of L1

The vibration duration and frequency distribution of the structure under the L1 shock condition, as shown in Figure 12, are similar to those under the H1-H3 and M1, M2 shock conditions. However, one distinct difference is that the vibration energy at each measurement point stays focused within specific frequency ranges for extended durations. Measurement points RM2, RM4, and RM6 exhibit significant vibration at f_L = 110 Hz and 138 Hz for T_L of over 800 ms, while measurement points RM3 and RM5 show concentrated vibration at f_L = 186 Hz. Moreover, measurement point CM1 displays pronounced vibration at f_L = 546 Hz.

Shock Position of L2

Figure 13 demonstrates that in the L2 condition, where the shock position is to the right of the vertical symmetry axis, the vibration patterns observed at each measurement point are similar to those in the L1 condition. However, in comparison to the L1 condition, the amplitudes of the vibrations at each measurement point are generally reduced. The vibration energy is concentrated in a few fixed frequencies, and at some measurement points, T_L is longer than that observed in the L1 condition. At measurement points RM4 and RM6, the vibrations predominantly occur at f_L = 138 Hz, with T_L of approximately 900 ms. At measurement points RM2, RM3, and CM2, the vibration energy is concentrated at f_L = 215 Hz, with T_L of around 800 ms. At measurement point RM3, which is close to the vertical symmetry axis, the amplitude of the vibrations around 200 Hz increases abnormally, and T_L is significantly longer than that observed in the L1 condition.
Finally, at measurement point CM1, which is close to the upper boundary, the vibrations are concentrated at f_L = 638 Hz, with a slightly longer T_L than that observed in the L1 condition.

Shock Position of L3

In the case of the L3 shock position, which is situated closest to the lower right corner of the steel plate, as depicted in Figure 14, the amplitude of each frequency at the various measurement points throughout the structure exhibits an overall reduction, along with a shortened T_L compared to the L1 and L2 shock conditions. Specifically, the maximum instantaneous amplitude of the fifth-level intrinsic mode function (IMF) at measurement point RM6 under the L2 and L3 shock conditions is reduced by 28% and 47%, respectively, in comparison to the L1 condition. The frequencies exhibiting the longest T_L at measurement points RM2, RM3, RM5, and RM6 are all f_L = 186 Hz, with durations spanning from 445 ms to 892 ms. For measurement points RM3 and RM4, which are situated nearer to the position vertically symmetric to the loading position, the amplitude below 200 Hz is higher and T_L is longer, particularly for measurement point RM3.

Effect of Shock Position on Vibration Characteristics of Steel Plates

Figure 15 depicts the vibration characteristics observed at measuring point RM1. In the figure, the bar chart represents f_L at measuring point RM1 under the different loading conditions: H-f_L (above the horizontal symmetry axis), M-f_L (on the horizontal symmetry axis), and L-f_L (below the horizontal symmetry axis), while the line chart represents T_L at measuring point RM1 under the different loading conditions.
It can be observed that for loading positions closer to the horizontal axis of symmetry (the M1-M3 cases), f_L at measuring point RM1 is above 12,000 Hz. However, for the H1-H3 and L1-L3 cases, f_L is between 2000-4000 Hz, and for the loading cases closer to the right boundary, the frequency is relatively smaller. The T_L values of the different cases show no obvious regularity, but they are all within 100 ms.

In Figure 16, a bar chart is presented that displays the f_L of vibration at measuring points RM2-RM6 for the various loading cases. Compared to measuring point RM1, which is close to the boundary, these measuring points show a significant reduction in f_L, making them unsuitable for inclusion in the same bar chart. Notably, the results reveal that in the M3 case, except for measuring point RM3, which is symmetric to the loading position, the f_L values detected by all other measuring points are significantly higher. Additionally, in both the H3 and M3 cases, measurement point RM3 exhibits a significantly lower vibration frequency compared to the other measurement points at the same horizontal position, given that RM3 is located at an equal distance from the vertical axis of symmetry as the loading position. It is worth noting that for most shock conditions investigated, the frequencies with the longest vibration duration (f_L) at measuring points RM3 and RM5 are significantly higher compared to the other measurement points.
Conclusions

This paper presents an experimental device consisting of a shock tube system and an adjustable steel plate, designed to investigate the transient shock vibration response of thin-walled structures. By applying shock loads at various positions and using accelerometers to collect data, a comprehensive understanding of the vibration response characteristics has been achieved. To analyze the vibration signals in the time-frequency domain, the research introduced two methods: the wavelet transform and EEMD with pchip interpolation. The accuracy of these methods was validated, and the results provide information on the distribution and attenuation of vibration energy across various frequency bands. The study reveals several findings, which can be summarized as follows:

1. Improved Signal Analysis: We have refined time-frequency domain analysis by using the Ensemble Empirical Mode Decomposition (EEMD) method with pchip interpolation. This significantly enhances accuracy, tackling mode mixing issues in traditional EMD and addressing the redundancy and frequency-discontinuity problems of the continuous wavelet transform (CWT). These enhancements will aid in accurately characterizing structure vibrations, thereby facilitating the development of more shock-resistant designs.

2. Boundary Vibration Attenuation: Our findings show that the boundary regions exhibit the fastest rate of vibration attenuation. This could guide the design and placement of sensitive components within a structure to minimize sustained vibrations.

3. Frequency Band Implications: The research reveals that the 100-300 Hz frequency band sustains the longest vibration duration for most shock conditions, providing pivotal knowledge for predicting potential failure areas or developing frequency-specific vibration dampening methods.

4. Influence of Shock Position: The study underscores that shock positioning significantly impacts vibration characteristics, and this insight can be incorporated into design strategies for specific shock or load conditions.

5. Asymmetric Shock Impact: We found that as the shock position moved towards the right side, the vibration amplitude and duration decreased.
This implies that asymmetric shock loads can influence the overall vibrational characteristics, which is a critical consideration for improving equipment resilience and performance.
A Narrative Review of Ultrasound Technologies for the Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer

Abstract

The incidence and mortality rates of breast cancer (BC) in women currently rank first worldwide, and neoadjuvant chemotherapy (NAC) is widely used in patients with BC. A variety of imaging assessment methods have been used to predict and evaluate the response to NAC. Ultrasound (US) has many advantages, such as being inexpensive and offering a convenient modality for follow-up detection without radiation emission. Although conventional grayscale US is typically used to predict the response to NAC, this approach is limited in its ability to distinguish viable tumor tissue from fibrotic scar tissue. Contrast-enhanced ultrasound (CEUS) combined with a time-intensity curve (TIC) not only provides information on blood perfusion but also yields a variety of quantitative parameters; elastography has the potential to predict NAC efficacy by evaluating tissue stiffness. Both CEUS and elastography can greatly improve the accuracy of predicting NAC responses. Other US techniques, including three-dimensional (3D) techniques, quantitative ultrasound (QUS) and US-guided near-infrared (NIR) diffuse optical tomography (DOT) systems, also have advantages in assessing NAC response. This paper reviews the different US technologies used for predicting NAC response in BC patients based on the previous literature.

Introduction

The incidence and mortality rates of breast cancer (BC) in women currently rank first worldwide, 1 and based on current NCCN guidelines, patients with different molecular types of BC should undergo neoadjuvant chemotherapy (NAC) regardless of their primary stage or aim of surgery. 2 NAC not only reduces tumor grade and improves patients' opportunities for surgery, but also allows new advances in cancer management by identifying new genetic pathways and drugs involved in cancer and improving patient survival. [3][4][5][6][7] However, some patients receiving NAC may gradually develop resistance to drugs during the course of chemotherapy, which limits its clinical efficacy and leads to treatment failure. These patients would benefit from assessment of the response to NAC to guide treatment planning and surgical strategy, including consideration of new treatment methods. 8 Chemotherapy can produce physiological and psychological side effects of varying degrees. Non-responders need to be accurately and promptly identified to avoid ineffective chemotherapy and its side effects. 9,10 As a result, accurately monitoring and evaluating the efficacy of NAC is becoming increasingly crucial.

Grayscale US for Assessing Response to NAC

US imaging is based on the conduction and reflection of high-frequency mechanical sound waves in tissue, and the information carried by ultrasonic pulses and their reflected echoes is converted and processed into real-time images. 20 Grayscale US has been widely used to distinguish malignant from benign breast tumors, and it has also been applied to predict the response to NAC. Grayscale US is primarily applied to measure changes in tumor size after NAC. Most researchers have demonstrated that US underestimates breast tumor size. [21][22][23][24] However, it has also been reported that US overestimates BC tumor size: Vriens et al examined 182 patients with BC and found that US overestimated size in 20% of patients. 25 Whether US has an advantage in estimating tumor size compared with other imaging methods has not been determined.
Stein et al measured tumor size in 6543 primary BC patients and found that the correlation with histology was 0.61 for MM and 0.60 for US. 26 They demonstrated that the predictions of tumor size by US and MM were concordant. The result was the same as that of the Chagpar et al study. 27 However, Keune et al reported that US correctly measures residual tumor size in response to NAC more often than MM (91.3% vs 51.9%). 28 Both the Vriens et al 25 and Choi et al 29 studies compared the roles of MRI and US in measuring tumor size and found that both modalities were consistent in estimating size. The sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), false-negative rate (FNR) and false-positive rate (FPR) were 36-61%, 78-90%, 71-74%, 60-67%, 29-75%, 20-38% and 35-60%, respectively, for US-predicted remission in BC after NAC. [30][31][32][33][34][35] The lower predictive efficiency might be due to the limited ability of US to distinguish viable tumor tissue from fibrotic scar tissue. [30][31][32]

Contrast-Enhanced Ultrasound (CEUS) for Assessing Response to NAC

There is currently increasing awareness that anatomical approaches based on measuring tumor size have substantial limitations and that morphological changes often manifest later than functional changes. 36 As a result, CEUS, dynamic contrast-enhanced MRI (DCE-MRI) and fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT), which can quantify tumor function, are playing increasingly larger roles in evaluating and predicting the response to NAC. CEUS might be one of the most direct imaging tools for visualizing perfusion changes in tumors. 37 In fact, the development of contrast agents has improved the diagnostic accuracy of US over the past decade. 38 Sonovue, a second-generation US contrast agent, has a microbubble diameter of 2 to 4 μm. It can remain within the blood pool because it is close to the diameter of blood cells and cannot diffuse into the interstitial space. CEUS has been widely used in the ultrasonic diagnosis of diseases of various organs throughout the body. In recent years, CEUS has also been used to evaluate the efficacy of NAC, and it was demonstrated to be effective for assessing the response to NAC in BC patients compared to other methods. 25,39 Previous studies showed that the sensitivity, specificity, accuracy, PPV, NPV, area under the curve (AUC), positive and negative likelihood ratios (LR+ and LR-) and diagnostic odds ratios (ORs) of CEUS imaging for predicting response to NAC were 85-96%, 78-87%, 84-86%, 92%, 78%, 0.71-0.92, 4.49-5.5, 0.15-0.16 and 32.21-36, respectively. 16,[40][41][42][43][44] Currently, CEUS is recommended for the evaluation of the efficacy of NAC and may be a promising tool for the evaluation of NAC response. 45 Imaging may underestimate or overestimate the size of breast tumors, as mentioned above. It was found that the tumor sizes measured both before and after NAC by CEUS were larger than those measured by conventional grayscale US. 40 Although CEUS has no significant advantage in the accuracy of tumor size measurement before NAC, it has a higher correlation with pathological results after NAC, and the measurement accuracy is higher than that of other imaging methods. 45
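The diagnostic indices quoted throughout this review all derive from a 2×2 table of predicted versus pathologically confirmed response. A minimal sketch with made-up counts (not data from the cited studies):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Indices used throughout this review, computed from a 2x2 confusion
    table of predicted vs. pathologically confirmed response."""
    sens = tp / (tp + fn)                    # sensitivity (true-positive rate)
    spec = tn / (tn + fp)                    # specificity (true-negative rate)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "PPV": tp / (tp + fp),               # positive predictive value
        "NPV": tn / (tn + fn),               # negative predictive value
        "FNR": 1 - sens,                     # false-negative rate
        "FPR": 1 - spec,                     # false-positive rate
    }

# Hypothetical counts, for illustration only:
print(diagnostic_metrics(tp=22, fp=11, tn=55, fn=14))
```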
In addition, necrosis at the tumor center was found to be detected early by CEUS, which showed a local blood perfusion defect, whereas conventional grayscale US was unable to detect necrosis unless there was a fluid area. 40 The TIC of CEUS reflects the speed and quantity of the contrast agent in the tumor microvasculature. TIC analysis is a state-of-the-art technique for CEUS video quantification that yields a variety of quantitative blood perfusion parameters. 40,46,47 The quantitative parameters of blood perfusion differ slightly according to the TIC analysis software used. Generally, they include PI (peak intensity, the maximum intensity of the time-intensity curve during bolus transit), TTP (time to peak, the time needed to reach peak intensity), RT (rise time, the difference between TTP and the time the first microbubble reached the lesion), MTT (mean transit time, the circulation time of the contrast agent in the area under investigation), WIS (wash-in slope, the ratio of PI to TTP), AS (ascending slope, the slope of the ascending branch of the TIC, which reflects the mean perfusion speed after arrival of the contrast agent and the local tissue perfusion rate), AUC and so on. TIC analyses of BC usually show shortening of RT and TTP and increases of PI and AS. Chemotherapy causes cytotoxic tumor cell death, resulting in reduced tissue vascular endothelial growth factor levels, apoptosis of immature endothelial cells with secondary vascular shutdown, and decreased blood perfusion, which in turn leads to slower wash-in of contrast agents. 48 However, a poor response to NAC results in ongoing production of angiogenic factors that might maintain or increase the proportion of immature vessels. 49,50 Consistent with these vascular changes, in tumors achieving pCR after NAC the parameters PI, 16,40,49 WIS, 40 AS 16 and AUC 51 were found to be significantly decreased, and TTP, 40,43,49 MTT 49 and RT 44 were significantly increased. Although multiple logistic regression analysis revealed that changes in the quantitative parameters of the TIC and in tumor diameter were both significant independent predictors of pCR, the TIC parameters, used as a functional technique to evaluate tumor response to NAC, were superior to changes in tumor size and predicted the response to NAC earlier. 49 In fact, the placement of regions of interest (ROIs) is also crucial for quantitative TIC analysis in CEUS research. 52,53 TIC parameters were analyzed for different ROIs in the research of Lee et al. 54 ROI 1, ROI 2, ROI 3, and ROI 4 targeted the hotspot area of greatest enhancement, the area of hyperenhancement, the entire tumor on grayscale ultrasound and normal parenchyma, respectively, and the results led to a recommendation for the less subjective ROI 2 or ROI 3. Studies have confirmed that breast CEUS combined with TIC analysis not only provides information about blood perfusion but also yields a variety of quantitative parameters that greatly improve the accuracy of predicting the response to NAC.
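A rough sketch of how the TIC indices defined above can be extracted from a sampled curve follows. The 10%-of-peak arrival threshold and the intensity-weighted approximation of MTT are simplifying assumptions, and the synthetic bolus curve stands in for real CEUS data:

```python
import numpy as np

def tic_parameters(t, intensity, arrival_frac=0.1):
    """Extract common TIC indices from a sampled time-intensity curve.
    Definitions follow the text above; the arrival threshold (10% of peak)
    and the intensity-weighted MTT are simplifying assumptions."""
    t = np.asarray(t, float)
    i = np.asarray(intensity, float)
    pi = float(i.max())                       # PI: peak intensity
    ttp = float(t[i.argmax()])                # TTP: time to peak
    arrival = float(t[np.argmax(i >= arrival_frac * pi)])
    rt = ttp - arrival                        # RT: rise time
    wis = pi / ttp                            # WIS: wash-in slope (PI / TTP)
    dt = np.diff(t)
    auc = float(np.sum((i[1:] + i[:-1]) * dt) / 2)               # trapezoidal AUC
    mtt = float(np.sum((t[1:] * i[1:] + t[:-1] * i[:-1]) * dt) / 2 / auc)
    return dict(PI=pi, TTP=ttp, RT=rt, WIS=wis, AUC=auc, MTT=mtt)

# Synthetic gamma-variate-like bolus curve, for illustration only.
t = np.linspace(0.1, 60, 600)
print(tic_parameters(t, (t / 12) ** 2 * np.exp(-t / 12)))
```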
Elastography for Assessing the Response to NAC

Elastography, which is similar to palpation but more sensitive and objective, can assess tissue stiffness. It is a recently developed, convenient and noninvasive imaging technique. There are two families of techniques in elastography: strain elastography and shear wave elastography. The strain elastography family includes strain elastography (SE) and acoustic radiation force impulse (ARFI) imaging, while the shear wave elastography family includes transient elastography (TE), point shear wave elastography (pSWE) and shear wave elastography (SWE). The most common methods in clinical use are SE and SWE. In general, the ability of elastography to predict the efficacy of NAC was significantly higher than that of grayscale ultrasound and comparable to that of CEUS and MRI. 42,55,56 The sensitivity, specificity, accuracy, PPV, NPV and AUC of predicting the efficacy of NAC by elastography were found to be 59-100%, 63-100%, 74-83%, 71-79%, 66-86% and 0.75-0.88, respectively, in previous studies. 42,[56][57][58][59] SE is an imaging modality based on mechanical properties that uses manual compression and measures the degree of tissue deformation. 60 The Tsukuba elasticity score (TES) can be used for scoring according to different colors, which represent the different stiffnesses of the tissue. 61 In addition to the elastography score, the strain rate (SR) can be calculated by software as the ratio of stiffness between the lesion area and the surrounding normal tissue. 62,63 Stiffer masses are more likely to represent malignancies. 64 Hayashi et al first reported tumor stiffness assessed using SE. 65 They examined the correlation between elasticity scores and treatment responses. The results demonstrated that stiffer tumors were less likely to exhibit clinical responses and pCR than more compliant tumors. However, other research demonstrated that SE imaging before NAC had no relationship with pathological complete response. 64 The change in the score and SR of elastography after NAC can be acquired by evaluating the score and SR of elastography before and after NAC. The score and SR of elastography of tumors obtaining pCR were found to be decreased, and the degree of reduction was higher than in those without pCR. 57,58 SE can predict the efficacy of NAC earlier. Falou et al found that SE can predict the response of tumors to NAC in the medium term (4 weeks) by using SR, and the sensitivity and specificity were 100%. 57 Fernandes et al demonstrated that SR can predict the pCR of tumors in the early stage (2 weeks), 58 with a sensitivity and specificity of 84% and 85%, respectively. Wang et al analyzed the parameters of CEUS and SE before and after NAC for BC, and the results showed that the AUCs of CEUS and SE in predicting the efficacy of NAC were 0.86 and 0.72, respectively, and the predictive accuracies were 89% and 91%, respectively. 42 They indicated that CEUS and SE had equal advantages in predicting the response to NAC, which was the same conclusion as that of the Gu et al study. 64 However, SE also has certain limitations. First, SE is highly dependent on the operator because of the manual compression; human error can occur due to differences in measurement between operators. Second, the elastography score is based on visual observation of the color of the ROI, which is highly subjective and may also introduce human error. SWE, which uses acoustic radiation force to introduce a disturbance and measures the speed of propagation of the resulting shear waves, can be used in conjunction with Hooke's law to derive the Young's modulus of tissue. 60,66,67
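For reference, the conventional SWE reconstruction converts shear wave speed into Young's modulus through E = 3ρc², which follows from Hooke's law for an incompressible, isotropic elastic medium; the tissue density below is the usual water-like assumption:

```python
RHO = 1000.0  # kg/m^3; soft tissue approximated as water-dense (assumption)

def young_modulus_kpa(shear_wave_speed: float) -> float:
    """Conventional SWE reconstruction E = 3*rho*c^2, valid for an
    incompressible, isotropic, elastic medium; returns kilopascals."""
    return 3.0 * RHO * shear_wave_speed ** 2 / 1000.0  # Pa -> kPa

# A 3 m/s shear wave maps to E = 27 kPa; stiffer (faster) tissue maps higher.
for c in (1.5, 3.0, 6.0):
    print(f"{c:.1f} m/s -> {young_modulus_kpa(c):.0f} kPa")
```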
The parameters maximum stiffness (Emax), mean stiffness (Emean) and standard deviation (SD), where E denotes the Young's modulus measured in kilopascals and the SD gives an indication of heterogeneity, refer to the elasticity values of the tissue within an ROI outlined on the US image. Masses with high mean stiffness values are more likely to be malignant. [66][67][68][69][70] SWE can play a complementary role to conventional US. 66,67 Stiffer tumors were found to be less likely to exhibit clinical responses and pCR. 71 The Young's modulus of tumors with pCR after NAC was lower than that of tumors without pCR. 59 The change in Young's modulus before and after NAC can be calculated, and the larger the change, the more likely it indicates a response to NAC. 56,59,72 SWE is also believed to be an effective early predictor of pCR after NAC. Evans et al demonstrated that the changes in tumor stiffness after 3 weeks of NAC were closely related to pCR. 56 Jing et al showed that the response to NAC could be predicted by the change in Young's modulus two weeks after NAC, with a predictive sensitivity and specificity of 73% and 86%, respectively. 72 One advantage of SWE is that residual fibrous masses with no residual cancer tend to appear compliant in SWE, so that assessment by SWE is less prone to errors. However, there are a few limitations of SWE. It is difficult to assess deep lesions in BC patients, and the technique is influenced by patients' breathing. 56

Three-Dimensional (3D) Techniques for Assessing Response to NAC

A 3D automated breast volume scanner (ABVS) is another novel and innovative imaging technique in breast ultrasound. ABVS is not only an observer-independent, automated and standardized method but also provides a large field of view using high-frequency transducers, and its computer-aided detection software significantly reduces interpretation time. 73 ABVS has the ability to calculate tumor volume using 3D imaging software, and multiple studies have shown a higher correlation with histopathological tumor response than conventional US, 73 with absolute concordance in 73% of patients for mid-NAC evaluation. Tumor response to NAC in midtreatment was evaluated using the product change in the two largest perpendicular diameters (PC) or the longest diameter change (LDC) in the study by Wang et al. 75 All four prediction methods (PC on axial planes, LDC on axial planes, PC on coronal planes and LDC on coronal planes) displayed high AUCs (0.83-0.89), and the sensitivity, specificity, PPV and NPV were 86-88%, 62-85%, 28-51% and 96-98%, respectively. ABVS was used to predict final pCR after 4 cycles of chemotherapy, and the results demonstrated that ABVS is a useful tool for the evaluation of pCR after NAC. There are also additional 3D technologies applied in US, such as 3D power color US and 3D CEUS. Folkman et al emphasized the importance of angiogenesis in tumor growth. 76 Vascular endothelial growth factors (VEGFs) are important regulatory cytokines, and microvessel density (MVD) has been the gold standard for assessing tumor angiogenesis. 3D power color flow with high-definition flow (HDF) technology facilitates vascular morphology imaging and better discriminates malignant breast tumors from benign lesions than 2D Doppler US.
3D power Doppler US provides high-resolution Doppler signal reflection and is not limited by the angle of the vessel, allowing 3D imaging of relevant vessels and observation of a tumor vascular mass by comparing perfusion changes and vessel density before and after NAC. 77-79 3D power Doppler US with HDF was considered to accurately predict response according to determination of changes in vascularity after NAC. The vascularization index (VI, the ratio between the color boxes and the total number of voxels in the volume of interest, representing the vessel density in the defined volume), flow index (FI, the mean energy of the voxel per color, representing the average intensity of flow), and vascularization-flow index (VFI, mean color value for all voxels in the volume, representing the intensity of both vascularization and flow) were extracted for estimation of vascularization and flow index. It was demonstrated that the most accurate prediction of pCR was achieved after the second chemotherapy treatment, with an accuracy of 88% and an AUC of 0.76. 80 The combined advantages of CEUS and 3D-US, 3D-CEUS, can evaluate tumor vascularity in a threedimensional field. In Jia's research, the 3D-CEUS score and DCE-MRI score were calculated according to the distribution and shape of breast tumor blood vessels in 3D-CEUS, and the enhancement pattern and index of tumor blood vessels in DCE-MRI, MVD and VEGF were also calculated. 81 The results showed that the 3D-CEUS score, DCE-MRI score, MVD and VEGF were significantly decreased after NAC. It was also shown that the efficacy of 3D-CEUS and DCE-MRI in predicting pCR after NAC was consistent. The sensitivity, specificity, PPV, NPV, and AUC were 88% and 88%, 100% and 100%, 100% and 100%, 98% and 98% and 0.93 and 0.99, respectively, when the 3D-CEUS score was <1 and Δ3D-CEUS score was >6 after NAC. The study also demonstrated that the 3D-CEUS score was significantly correlated with MVD and VEGF both before and after NAC, while DCE-MRI was not correlated with MVD or VEGF. These results showed that 3D-CEUS is effective for assessing the response to NAC. Although 3D power color US and 3D-CEUS are rarely used in the clinic, they have a high accuracy in predicting the efficacy of NAC and are related to angiogenic factors, and 3D techniques still have certain development prospects. Other US Techniques for Assessing Response to NAC Changes in tumor echogenicity have also been applied to predict the response to NAC. Matsuda et al retrospectively examined 52 patients with triple-negative BC who received NAC and calculated changes in echogenicity. 82 Pixel values are represented in black to white as different numeric values. It was demonstrated that the echogenicity changes (ratio and difference) in tumor and fat regions before and after NAC would predict pCR. The sensitivity, specificity and AUC were 70-74%, 81-82% and 0.78-0.80, respectively. Dobruch-Sobczak et al found that the changes in the echogenicity of tumors after 3 courses of NAC exhibited the strongest statistical correlation with the percentage of residual malignant cells used in histopathology to assess the response to treatment (odds ratio=60). 83 Changes in tumor echogenicity were demonstrated to predict response with satisfactory accuracy and may be considered for early NAC monitoring. Assessing changes in echogenicity is based on the backscattered spectrum of QUS. 84 NAC can induce microstructural transformation, cell death, heterogeneous tumor morphology and edema. 
These cellular organizational changes alter US backscatter and increase tumor echogenicity. 85 Conventional US imaging involves "B-mode" images and loses much of the frequency-dependent information with the conversion of radiofrequency (RF) data. Compared to conventional US, QUS imaging retains these RF data and displays them as a frequency spectrum using a fast Fourier transform (FFT) algorithm. 86 As a result, QUS was found to predict the response to NAC more accurately by supplying more information from US images. QUS uses variations in the acoustic properties within tissues to characterize microstructural features and has been used in the detection of tumor response to chemotherapy. 87 QUS parameters include MBF (midband fit), SS (spectral slope), SI (spectral intercept), ACE (attenuation coefficient estimate), SAS (spacing among scatterers), ASD (acoustic-scatterer diameter) and AAC (average acoustic-scatterer concentration). Sannachi et al analyzed QUS parameters and texture features extracted from QUS parameters. 87 The accuracies ranged from 68% to 92% according to the different multifeature response classification algorithms (linear discriminant analysis (LDA), a k-nearest-neighbor classifier (KNN), and a radial-basis-function support vector machine classifier (SVM-RBF)) used to differentiate treatment responders. Tadayyon et al analyzed the capacity for predicting the response to NAC with QUS and an artificial neural network (ANN) classifier. 88 They found that the sensitivity, specificity, accuracy and AUC of the QUS model with an ANN classifier for predicting response were 89%, 85%, 87% and 0.90, respectively. Changes in QUS parameters, particularly ultrasound backscatter intensity-based parameters, could distinguish responders from nonresponders at a later stage (week 8), while texture features could distinguish responders from nonresponders at early stages (weeks 1 and 4). The authors demonstrated an early-stage treatment response prediction model developed by combining QUS with texture analysis. 87 Another study, by Piotrzkowska-Wróblewska et al, demonstrated that an integrated backscatter marker of QUS can better characterize the tumor pathological response at an earlier stage of therapy (after the second and third NAC courses), with AUCs of 0.69 and 0.82, respectively. 89 Recently, it was reported that the NIR technique utilizes intrinsic hemoglobin contrast, which is directly related to tumor angiogenesis, and has shown great potential for evaluating tumor vasculature and oxygen consumption responses to NAC by optical tomography and optical spectroscopy. 90 DOT is an optical imaging technique that uses near-infrared light to probe the absorption and scattering properties of biological tissues and to acquire information on tumor physiology, biochemistry, angiogenesis and hypoxia. 91-95 DOT combined with US has been explored for breast cancer diagnosis and for monitoring NAC responses in BC. Combining pretreatment hemoglobin content and hemoglobin changes measured at early treatment cycles with standard pathological variables improves the predictive accuracy of NAC. 96 The size and total hemoglobin concentration (THC) of the lesions were measured 1 day before biopsy and 1 to 2 days before surgery to predict BC response to NAC using US-guided DOT in the research of Zhi et al.
91 The sensitivity, specificity, accuracy, PPV, NPV, and AUC were 74% and 80%, 77% and 53%, 77% and 38%, 93% and 88%, 74% and 77%, 0.75 and 0.69, respectively, when ΔTHC was 23.9% and ΔSIZE was 42.6%. Moreover, ΔTHC and ΔSIZE can be used for response evaluation and earlier prediction of the response after three rounds of NAC. The authors considered US-guided DOT to be useful for early evaluation and prediction of the response to NAC. The sensitivity, specificity, accuracy, PPV, and NPV of different US assessment methods for the prediction of neoadjuvant chemotherapy response in BC according to the results of the literature review are shown in Table 1. In addition, Figures 1 and 2 show US manifestations of different assessment methods in BC after NAC. Conclusions The goals of NAC have anecdotally been to enable breast conservation and prognostic information. US enables the advantages of portability, low cost, convenient follow-up detection without radiation emission and so on. In general, grayscale US has a low accuracy in the evaluation of the efficacy of NAC, while CEUS, elastography, 3D techniques, and other US techniques (QUS, DOT) can improve the accuracy of prediction. US technology combined with functional examination would be a great development prospect in assessing the response to NAC, and US is expected to be increasingly accepted to predict the efficacy of NAC in the future. However, this paper reviewed only the changes in BC mass after NAC by US and did not examine changes in lymph nodes in the corresponding region. A more detailed review would incorporate the changes in BC masses and lymph nodes observed by US after NAC.
Genuine Tripartite Entanglement in a Spin-Star Network at Thermal Equilibrium

In a recent paper [M. Huber et al, Phys. Rev. Lett. 104, 210501 (2010)] new criteria to find out the presence of multipartite entanglement have been given. We exploit these tools in order to study thermal entanglement in a spin-star network made of three peripheral spins interacting with a central one. Genuine tripartite entanglement is found in a wide range of the relevant parameters. A comparison between predictions based on the new criteria and on the tripartite negativity is also given.

I. INTRODUCTION

Entanglement has been widely studied for decades: criteria to detect the presence of bipartite entanglement in a quantum state are well known [1], and, for systems with few degrees of freedom, it can be quantified [2], whether the relevant state is pure or mixed. The analysis of multipartite entanglement is a more complicated task. For example, there have been many proposals of tripartite entanglement quantifiers [3][4][5][6] and witnesses [7][8][9], but none of these contributions has given a definitive solution to the problem of singling out and quantifying this type of correlations [10]. The three-tangle has been considered a good tool able to quantify tripartite entanglement in pure states [3], but recently it has been criticized [11]. Difficulties grow when the system is described by a mixed state. Indeed, many of the proposals previously mentioned are valid only for pure states. An interesting tool for detecting tripartite correlations in mixed states has been presented by Sabin and Garcia-Alcaine [12], but the tripartite negativity they introduced (i.e., the geometric mean of the three negativities associated to the three possible bipartitions of a tripartite system) is not able to tell a genuine tripartite entangled state from a state which is biseparable in a generalized sense. Very recently, Huber et al [13] have given a set of relations that provide sufficient conditions to assert the presence of multipartite entanglement in an indisputable way, whether the state under scrutiny is pure or mixed. The basic idea of such criteria is to exclude the presence of any form of biseparability, in connection with all the possible bipartitions. Over the last decade, the concept of thermal entanglement has emerged by investigating the presence of quantum correlations in quantum systems at thermal equilibrium [14]. In this context, the existence of quantum correlations has been put in connection with phase transitions [15,16]. Thermal entanglement has been studied in spin chains described by Heisenberg models [17], in atom-cavity systems [18], in simple molecular models [19], and has been proposed as a resource in quantum teleportation protocols [20]. Nonclassical and nonlocal correlations in thermalized quantum systems have been investigated [21,22]. Thermal entanglement has been studied in spin-star networks.
For instance, Hutton and Bose [23] have analyzed the zero-temperature properties of such quantum systems, bringing to light interesting properties related to the parity of the number of outer (peripheral) spins. Recently, Wan-Li et al have studied the thermal entanglement in a spin-star network with three peripheral spins [24], evaluating pairwise entanglement between all possible couples of spins. More recently, Anzà et al [25] have analyzed tripartite correlations in a similar system, exploiting the tripartite negativity. Nevertheless, as already pointed out, such a tool cannot distinguish between tripartite entanglement and generalized biseparability. In this paper, we investigate tripartite entanglement in the same system analyzed by Anzà et al, but exploiting the new criteria introduced by Huber et al. To this end, in the next section we summarize the results of ref [13] and specialize them to the three-spin case. In the third section we apply these tools to a thermalized spin-star network made of three peripheral spins interacting with a central one, bringing to light the presence of genuine tripartite thermal entanglement. Finally, in the last section, we discuss our results and give some conclusive remarks.

II. DETECTION OF TRIPARTITE ENTANGLEMENT

In a recent paper by Huber et al [13], it has been shown that, given a biseparable density operator ρ acting on the Hilbert space H, whether corresponding to a pure or a mixed state, for any completely separable state |Ψ⟩ of the duplicated Hilbert space H ⊗ H the inequality Q ≤ 0 holds (1), where i runs over all possible bipartitions of the system. The operator Π entering (1) performs swapping between the two parts of the duplicated Hilbert space (2). Moreover, for a bipartition of the system (A_i, B_i) and any separable state |Ψ⟩ = |ψ_{A_i}⟩ ⊗ |ψ_{B_i}⟩ ⊗ |χ_{A_i}⟩ ⊗ |χ_{B_i}⟩ ∈ H ⊗ H, the relevant matrix elements can be evaluated explicitly (3). On the basis of (1), the occurrence of the condition Q > 0 for some trial state |Ψ⟩ guarantees that the state ρ possesses genuine multipartite entanglement, in the sense that it is neither simply biseparable nor biseparable in a generalized sense (i.e., a state of the form ρ = Σ_i p_i ρ_{A_i} ⊗ ρ_{B_i}). Therefore, after introducing the positive part of Q (4), for a finite-dimensional Hilbert space we can use the strict positivity of its integral I over trial states (5) as a sufficient condition to assert that ρ is a multipartite entangled state. In (5) the integration is meant over all possible completely separable states of the H ⊗ H Hilbert space. This means that, if the state |Ψ⟩ depends on P parameters, q_1, ..., q_P, one has dΨ = ∏_{k=1}^{P} dq_k. Let us specialize this analysis to a three-spin system. In order to make the numerical calculation feasible, we forgo integration over all possible trial states, and we consider only the special case where the trial state is a product of single-spin states parametrized by polar and longitudinal angles (6). This choice gives rise to an explicit expression for the quantity Q (7), where the first bra in a product ⟨ψ_3|⟨ψ_2|⟨ψ_1| refers to the third spin, and so on; the relevant positive part follows accordingly (8). In order to further simplify the calculation associated to (5), we introduce a nonnegative C-function (9), where the integration over the longitudinal angles, φ and ξ, has been replaced by a finite sum. The analytical calculation of this quantity for the state |W⟩ = (|110⟩ + |101⟩ + |011⟩)/√3 can easily be carried out, and the corresponding calculation for the GHZ state gives a very similar result, provided the swapping of all the trigonometric functions: sin ⇆ cos.
Moreover, we have performed the same analysis for separable states of different kinds, and we have always found that the corresponding C-function is zero everywhere. Integration of the functions plotted in Fig. 1 provides I^(1) for the two states. Performing the integration over θ and η with a 15 × 15 grid, we have got I^(1)(ρ_W) ≈ 0.36 and I^(1)(ρ_GHZ) ≈ 0.75, while, spanning over four remarkable longitudinal angles (0, π/2, π, 3π/2), we have got I^(4)(ρ_W) ≈ 2.87 and I^(4)(ρ_GHZ) ≈ 11.46. In Fig. 2 we show the function I^(N)(ρ) for three classes of mixed states: mixtures of |GHZ⟩ and |W⟩, mixtures of |GHZ⟩ and the factorized state |111⟩, and mixtures of |W⟩ and |111⟩. In this figure and in the next analogous ones, we plot the ratios between I^(N)(ρ) and I^(N)_0 = I^(N)(ρ_GHZ). It is well visible that I^(1)(ρ) and I^(4)(ρ) approach zero as the state approaches a factorized state, while these quantities reach higher values as the state possesses tripartite entanglement. This analysis supports the idea that the criteria introduced in [13] are quite effective in revealing genuine tripartite entanglement. Nevertheless, it is important to note that the subset of trial states considered plays a very fundamental role in the detection of multipartite entanglement. Indeed, if we consider for example trial states of the form |Ψ′⟩ = |θ, φ⟩ ⊗ |θ, φ⟩ ⊗ |η, ξ⟩ ⊗ |η, ξ⟩ ⊗ |η, ξ⟩ ⊗ |θ, φ⟩, then we are not able to detect entanglement of the GHZ state. On the contrary, this choice is able to detect tripartite entanglement of the state |σGHZ⟩ = (|110⟩ + |001⟩)/√2, which instead never violates the inequality Q ≤ 0 when the trial state has the form given in (6). In fact, on the one hand, it is I^(1)(ρ_σGHZ) = I^(4)(ρ_σGHZ) = 0, while on the other hand, in Fig. 3 we can see that, when the trial state has the form |Ψ′⟩, it turns out that Q > 0 in a wide range. In spite of these limitations related to spanning a subset of the relevant Hilbert space, we will use the functionals I^(N)(ρ) defined in (10) to carry on our analysis, both for the sake of simplicity and since we think it is effective enough for our problem.

III. THERMAL TRIPARTITE ENTANGLEMENT

Spin-star networks have been studied in connection with decoherence problems, especially in the analysis of the non-Markovian character of spin baths [26], and for applications in quantum information [27]. In a recent paper, Wan-Li et al [24] have studied the thermal entanglement in a spin-star system made of a central spin coupled to three peripheral spins through anisotropic σ-σ interactions (the longitudinal 'σz-σz' interaction and the total transverse interaction 'σx-σx' + 'σy-σy' have independent coupling strengths), identical for the three outer spins. More recently, a similar system has been studied, removing the longitudinal (i.e., σz-σz) interaction from the coupling between the spins, and introducing a certain inhomogeneity in the coupling strengths between the central spin and the outer ones [25]. Here we examine the same model, whose Hamiltonian contains a free term proportional to ω0 and transverse (flip-flop) coupling terms, where σ^k_α is the Pauli operator along the direction α (α = x, y, z) of the spin k (k = 0, 1, 2, 3), σ^k_± are the corresponding raising and lowering operators, ω0 is the free Bohr frequency of all the spins due to an external magnetic field, and c_k is the coupling constant between the spin 0 and the k-th one. Once the system reaches thermodynamic equilibrium, it can be described by the thermal state ρ = exp(−H/kT)/Z, which has the same eigenstates as the Hamiltonian H.
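As a rough numerical illustration of this procedure, the sketch below builds a four-spin XX-type spin-star Hamiltonian, forms the thermal state, traces out the central spin, and evaluates the tripartite negativity of the peripheral state (the geometric mean of the three bipartite negativities mentioned in the introduction, used here instead of the I^(N) functionals because it has a compact closed-form recipe). The flip-flop form and the prefactors of the Hamiltonian are assumptions inferred from the description above (ħ = 1), so the numbers are indicative only:

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # raising operator
sm = sp.T                                 # lowering operator

def op_on(site_ops, n=4):
    """Tensor product placing single-qubit operators on chosen sites
    (site 0 = central spin, sites 1-3 = peripheral spins)."""
    ops = [np.eye(2)] * n
    for site, o in site_ops:
        ops[site] = o
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

w0, c, kT = 1.0, 2.0, 0.01  # homogeneous couplings, low temperature (assumed units)
H = sum(0.5 * w0 * op_on([(k, sz)]) for k in range(4))
H = H + sum(c * (op_on([(0, sp), (k, sm)]) + op_on([(0, sm), (k, sp)]))
            for k in (1, 2, 3))

# Thermal state exp(-H/kT)/Z, built from the spectrum to avoid overflow.
evals, evecs = np.linalg.eigh(H)
w = np.exp(-(evals - evals.min()) / kT)
rho = (evecs * (w / w.sum())) @ evecs.conj().T

# Reduced state of the three peripheral spins: trace out the central spin.
rho_p = np.trace(rho.reshape(2, 8, 2, 8), axis1=0, axis2=2)

def negativity(r, nqubits, part):
    """Negativity of bipartition `part` vs. the rest, via the trace norm
    of the partial transpose."""
    dims = [2] * nqubits
    r = r.reshape(dims + dims)
    axes = list(range(2 * nqubits))
    for q in part:
        axes[q], axes[nqubits + q] = axes[nqubits + q], axes[q]
    rpt = r.transpose(axes).reshape(2 ** nqubits, 2 ** nqubits)
    return (np.abs(np.linalg.eigvalsh(rpt)).sum() - 1.0) / 2.0

# Tripartite negativity: geometric mean over the three 1-vs-2 bipartitions.
negs = [negativity(rho_p, 3, [q]) for q in range(3)]
print("N(rho_P) =", np.prod(negs) ** (1.0 / 3.0))
```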
The result of the diagonalization of H is reported in the Appendix A. In ref [25], Anzà et al have considered the homogeneous case (c 1 = c 2 = c 3 = c) and different kinds of inhomogeneous models. In the following we will consider both the homogeneous model and the inhomogeneous case c 1 = c 3 = c and c 2 = c x, with x a dimensionless inhomogeneity parameter. We will apply the new criteria for multipartite entanglement detection to the state obtained starting from the four-qubit thermal state and tracing over the degrees of freedom of the central spin: which describes the three peripheral spins. A. Homogeneous Model The homogeneous model has been studied by Wan-Li et al [24] (with the addition of a longitudinal coupling) and by Anzà et al [25]. In the first paper, the pairwise entanglement has been studied, through the use of concurrences. In the second paper, tripartite correlations have been investigated, through the use of the tripartite negativity [12]. Tripartite negativity is an imperfect tool to detect genuine tripartite entanglement, since it cannot distinguish between this form of entanglement and generalized biseparability. Nevertheless, it has helped to find points wherein tripartite correlations are significant, even if to disclose the nature of these correlations one needs a further analysis. On the basis of the criteria proposed in ref [13], it is possible to assert in an indisputable way the presence of tripartite entanglement when the condition Q > 0 is fulfilled. The quantity in (5) and its simplified version in (10), provide sufficient conditions for the presence of tripartite entanglement. Moreover, one could think that they furnish sorts of degree of entanglement, in the sense that higher values of these quantities can be understood as higher or wider violations of the inequality Q ≤ 0. Notwithstanding, it is important to stress that neither I (N ) nor I provide a measure of entanglement, and that in the case of I (N ) there is also the problem that a limited part of the relevant Hilbert space is spanned in the integration process, as already pointed out. Fig. 4 and 5 show the tripartite negativity N (ρ (P) ) and the quantity I (1) (ρ (P) ), respectively, as functions of both the temperature and the coupling constant between the central spin and the peripheral ones. Fig. 6 shows the quantities I (N ) (ρ (P) ), for N = 1 and N = 4, as functions of the coupling constant, at low temperature. The behaviors are qualitatively very similar: for increasing temperature the quantity I (1) (ρ (P) ) decreases, while at very low temperature abrupt changes are well visible at specific values of the coupling constant. In particular, for kT /ω 0 = 0.01, around the value of the coupling constant c = 0.6ω 0 , there is a first transition from 0 to a positive value, and around c = 3.7ω 0 another transition is well visible. These transitions, revealed by all the witness quantities here considered, correspond to very abrupt changes of the ground state of the four-qubit system. In particular, for c < 0.6ω 0 the ground state is |ψ 8 = |0000 , for 0.6ω 0 < c < 3.7ω 0 the lowest energy state is ψ − 4 , and for c > 3.7ω 0 the ground state is ψ − 2 (see Appendix A for the explicit expression of these states). The corresponding three-qubit states are: ρ (P) ≈ ρ (8) = |000 000|, ρ (P) ≈ ρ (4−) = 0.5 |111 111| + 0.5 |W W |, and ρ (P) ≈ ρ (2−) = 0.5 |W W | + 0.5 |W W |, respectively. B. Inhomogeneous Model In [25], it has also been analyzed the effect of anisotropy in the coupling constants. 
The analysis based on the tripartite negativity shows that, in spite of the lack of symmetry of the system, the degree of correlation between the three peripheral spins can still be appreciable. In particular, it has been brought to light that at low temperature the maximum of the tripartite negativity is reached for values of the inhomogeneity parameter different from (larger than) unity. Such behavior is well visible in Fig. 7 and Fig. 9a. This unexpected result seemingly suggests that the maximum of tripartite correlations does not correspond to the maximum of symmetry of the system. It is interesting to compare such results with those coming from the tools based on the work by Huber et al [13]. Fig. 8 shows the quantity I^(1)(ρ^(P)) as a function of temperature and of the anisotropy parameter x, for c = 6ω0. Fig. 9 shows the low-temperature profiles, where fast transitions are very well visible. The local maximum of the tripartite negativity around x = 2.5 is appreciable. On the contrary, the quantity I^(4)(ρ^(P)) does not exhibit the same behavior; instead, it possesses a maximum at x = 1. At low temperature, both N(ρ^(P)) and the I^(N) functionals have significant values for intermediate values of x, say for 0.5 ≲ x ≲ 5, and are small or vanishing outside this region. Note that for very small x (c2 ≪ c1, c3) one of the spins is almost uncoupled from the central one. On the other hand, for large x (c2 ≫ c1, c3) that spin has a much stronger coupling constant than the other two, whose couplings can then be considered as a perturbation, so that, at zeroth order, the latter two spins are uncoupled from the central one. In both cases, it is physically reasonable that tripartite correlations between the outer spins are negligible or absent. The abrupt changes of the witness quantities are related to the sudden modifications of the ground state of the system. However, we remark that low or vanishing values of the tripartite negativity and I^(N) (or even I) do not guarantee the absence of tripartite entanglement or tripartite correlations. Conversely, the non-vanishing values of some I^(N) functionals guarantee the presence of tripartite entanglement.

Figure 9: (a) low-temperature profile of the tripartite negativity of the peripheral state N(ρ^(P)) versus the inhomogeneity parameter x; (b) low-temperature profiles of I^(1)(ρ^(P))/I^(1)_0 (solid curve, red online) and I^(4)(ρ^(P))/I^(4)_0 (dashed curve, blue online). In both panels the coupling constant is c = 6ω0 and the temperature is such that kT/ω0 = 0.01.

IV. DISCUSSION

In this paper we have investigated the tripartite thermal entanglement in a spin-star network with three peripheral spins. The interaction with the central spin is responsible for the establishment of tripartite correlations between the peripheral ones, and such correlations survive even when the system is at thermal equilibrium. We have considered both the homogeneous model, where all the coupling constants are equal, and the inhomogeneous model, where one of the outer spins is coupled to the central one with a different strength. The analysis is carried on through the use of the quantities defined in (10), which are a simplified version of (5), where a limited region of the relevant Hilbert space is spanned. Each of these functionals has the property that its strict positivity guarantees the presence of genuine tripartite entanglement.
Nevertheless, it must be clarified that none of such quantities provides a measure of the amount of tripartite entanglement. Anyway, a larger value of I does mean a higher or wider violations of the condition Q ≤ 0. Therefore, one can conjecture that a higher value of I corresponds to a state that exhibits entanglement more than other states. The same assertion is weaker when applied to I (N ) , since evaluation of this functional does not require spanning over all of the Hilbert space. Moreover, it is important to know that the use of different I-functionals (I, I (N ) with different N , or other similar quantities that consider spanning on different subsets of the relevant Hilbert space) could lead to different predictions. For the homogeneous model, the low temperature behavior is characterized by abrupt changes of the quantities I (N ) versus the coupling constant. These transitions correspond to concomitant abrupt changes of the system ground state. At higher temperature, I (1) goes to zero. For the inhomogeneous model, the dependence of I (1) on the inhomogeneity parameter and temperature, when the coupling constant is fixed at some high value, is again characterized by abrupt changes with respect to x at low temperature, and by vanishing at high temperature. It is remarkable that, at low temperature, the dependence on the inhomogeneity parameter reveals the presence of a maximum for x = 1, i.e. in the homogeneous case. This result, on the one hand is seemingly in line with expectations coming from intuition, and on the other hand is supposedly different from the predictions coming from the use of tripartite negativity. Nevertheless, different behaviors of these quantities do not imply contradictions, since neither tripartite negativity nor the I-quantities provide necessary conditions for the presence of tripartite entanglement or a measure of such form of entanglement. What is sure is that in the pa-rameter region where I (N ) is non vanishing, the thermal state possesses genuine tripartite entanglement. Therefore, in spite of the limitations of our analysis, we have found genuine tripartite entanglement in our system at thermal equilibrium, even at non vanishing temperature and in the presence of inhomogeneity.
Study on soil hydraulic properties of slope farmlands with different degrees of erosion degradation in a typical black soil region

In order to explore the impact of soil erosion degradation on soil hydraulic properties of slope farmland in a typical black soil region, typical black soils with three degrees of erosion degradation (light, moderate and heavy) were selected as the research objects. The saturated hydraulic conductivity, water holding capacity and water supply capacity of the soils were analyzed, as well as their correlations with soil physicochemical properties. The results showed that the saturated hydraulic conductivity of black soils in slope farmlands decreased with erosion degradation degree, and was higher in the 0–10 cm soil layer than in the 10–20 cm soil layer. The water holding capacity and water supply capacity of typical black soils also decreased with the increase of erosion degradation degree, and both were stronger in the upper soil than in the lower soil. With the aggravation of erosion degradation of black soils, soil organic matter content decreased while soil bulk density increased, leading to the decline of soil hydraulic conductivity. The increase of soil bulk density and the decrease of the contents of organic matter and >0.25 mm water-stable aggregates were the main factors leading to the decrease of soil water holding capacity. These findings provide a scientific basis and basic data for the rational utilization of soil water, improvement of land productivity and prevention of soil erosion.

INTRODUCTION

Soil hydraulic properties can usually be characterized by soil infiltration performance, the soil water characteristic curve and soil water content, which are the basis for evaluating soil water conservation (Huo et al., 2018). Soil saturated hydraulic conductivity (Ks) affects surface water infiltration, runoff and sediment yield (Fares, Aiva & Nkedi-Kizza, 2000; Masís-Meléndez et al., 2014; Wu et al., 2016), and is an important parameter reflecting soil infiltration performance. The higher the saturated hydraulic conductivity, the better the soil infiltration performance. Increasing soil saturated hydraulic conductivity can delay surface runoff caused by precipitation, thus reducing soil erosion. The soil water characteristic curve describes the relationship between soil water content and water suction and reflects soil water holding capacity. In this study, we selected typical black soils with three erosion and degradation degrees (light, moderate and heavy erosion) in northeast China as the research objects; by determining soil saturated hydraulic conductivity, water holding capacity and water supply capacity, and analyzing their correlations with soil physicochemical properties, we sought to clarify the influence mechanism of black soil erosion and degradation on soil hydraulic properties. We hypothesized that: (1) with the aggravation of soil erosion degradation, soil saturated hydraulic conductivity, water holding capacity and water supply capacity reduce continuously; (2) the aggravation of soil erosion degradation affects soil hydraulic properties mainly through decreasing soil organic matter content and affecting soil texture.

MATERIALS AND METHODS

The study area

The study region is located at the Keshan Experimental Station of Heilongjiang Province Hydraulic Research Institute (125°49′42″E, 48°3′33″N) in Keshan County, Qiqihar City, Heilongjiang Province, China (Fig. 1). The landform of this area consists of rolling hills with gentle, long slopes, and hilly terrain accounts for 80% of the total area. It is influenced by a cold temperate continental monsoon climate. The annual average temperature is 2.4 °C, the frost-free period is about 122 days, and the annual average precipitation is about 500 mm. More than 70% of the rainfall is concentrated between June and September, so rainfall and warmth coincide seasonally. The main soil type in this area is typical black soil, and topsoil depth is about 20 cm. The cropping system is one crop a year, with soybean and corn rotation.

Selection of sampling plots

Slope farmlands in the black soil region have suffered from soil erosion, which leads to thinning of the black soil layer and decreases in soil nutrients and crop yield (He & Xiao, 2022; Liu & Yan, 2009). It has been reported that the soil erosion intensity of slope farmlands in the black soil region can be categorized according to slope degree (Han & Guo, 2017; Yang, Wang & Xie, 2009). In our study, we further calculated soil loss speed and erosion modulus based on slope degree (Kang, Liu & Liu, 2017; Yan & Tang, 2005), and also investigated black soil layer thickness and crop yield (Wang, Liu & Wang, 2009; Zhang & Liu, 2020), to define the soil erosion degree of slope farmlands in the black soil region. Finally, we selected three sampling sites with different degrees of erosion degradation (light, moderate and heavy erosion), based on comprehensive consideration of slope degree, black soil layer thickness, crop yield, soil loss speed and erosion modulus. The detailed information on the three sampling sites can be seen in Table 1, and the locations of these sites can be seen in Fig. 1.

Soil sampling

Field experiments were approved by the Heilongjiang Province Hydraulic Research Institute (12230000414003295L) and, after we obtained oral permission from the administrator (Mr. Xujun Liu, the head of Keshan Experimental Station of Heilongjiang Province Hydraulic Research Institute), we collected the soil samples in June of 2022. Soil samples were collected from the lightly, moderately and heavily eroded plots. Three sampling quadrats were randomly selected from each sample plot. In each quadrat, soil samples were collected from the 0-10 and 10-20 cm soil layers, respectively, by the plum blossom five-point sampling method. Undisturbed soil samples and cutting ring soil samples were also collected from the two soil layers.

Soil properties determination

Soil bulk density was measured by the cutting ring method (Blake & Hartge, 1986). The mechanical composition of soil was measured by the pipette method (Day, 1965). Soil water-stable aggregates were determined by the wet sieve method (ISSAS, 1978). Soil organic matter content was determined by the potassium dichromate external heating method (Kononova, 1961). Soil total nitrogen content was determined by the semi-micro Kjeldahl method (Bremner, 1960). Soil available phosphorus was extracted by 0.5 mol/L sodium bicarbonate solution and the concentration in extracts was determined by the molybdenum antimony colorimetry method (Olsen et al., 1954). Soil available potassium was extracted by ammonium acetate and the concentration in extracts was determined by the flame spectrophotometry method (Pansu & Gautheyrou, 2007). The saturated hydraulic conductivity of soil was measured by the constant head method (Klute & Dirksen, 1986). The characteristic curve of soil moisture was measured by the centrifuge method (Soil Moisture Determination Method, 1986).
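For orientation, the constant-head measurement reduces to Darcy's law; a minimal sketch with illustrative numbers (not the station's data):

```python
def ks_constant_head(volume_cm3, sample_len_cm, area_cm2, time_min, head_cm):
    """Darcy's law for the constant-head test: Ks = V*L / (A*t*dH)."""
    return volume_cm3 * sample_len_cm / (area_cm2 * time_min * head_cm)

# Illustrative numbers: 25 cm3 percolating through a 5 cm high ring of
# 20 cm2 cross-section in 30 min under a constant 5 cm head difference.
ks_cm_min = ks_constant_head(25.0, 5.0, 20.0, 30.0, 5.0)
print(f"Ks = {ks_cm_min * 10:.3f} mm/min")  # cm/min -> mm/min
```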
Fitting model

Due to the wide range of soil textures it accommodates and its high degree of fit with measured data (Van Genuchten, 1980), the Van Genuchten (VG) model has been widely used for estimating the soil water characteristic curve, especially in the black soil region (Gao, Gu & Li, 2018; Wang et al., 2018). Therefore, in this study the VG model was adopted, and its expression (Lei, Yang & Xie, 1988) is as follows:

θ(h) = θr + (θs − θr) / [1 + (a·h)^n]^m

In the above formula, θ is the volumetric moisture content of soil under suction h; θr is the residual moisture content at the permanent wilting point; θs is the saturated volumetric moisture content; a is a suction-related parameter equal to the reciprocal of the air-entry value, and the air-entry value of soil is related to the soil texture. Generally, the air-entry value of heavy clay soil is larger, while that of light or well-structured soil is smaller. h is soil water suction; n and m are curve shape parameters; n reflects the change of soil moisture content with soil water suction, and the value of n determines the slope of the soil water characteristic curve. The larger the value of n, the gentler the slope of the curve, taking m = 1 − 1/n. The formula of specific water capacity is:

C(h) = −dθ/dh = a·n·m·(θs − θr)·(a·h)^(n−1) / [1 + (a·h)^n]^(m+1)

Data processing and analysis

Soil water characteristic curves were fitted by the RETC software (https://www.pc-progress.com/en/default.aspx?retc). The differences in soil properties (e.g., soil bulk density, soil mechanical composition, soil organic matter content and saturated hydraulic conductivity) among slope farmlands with different degrees of erosion and degradation were analyzed by one-way ANOVA, and the correlations between soil physicochemical properties (soil organic matter content, soil bulk density, sand, silt, clay) and water characteristic parameters (a, n, C(100), Ks) were analyzed by Pearson correlation analysis, using SPSS 17.0 software (SPSS Inc, Chicago, IL, USA).

Saturated hydraulic conductivity of black soils

Soil saturated hydraulic conductivity is an important parameter reflecting soil infiltration performance. The greater the infiltration performance of soils, the greater their water retention potential. As shown in Fig. 2, the saturated hydraulic conductivity of lightly eroded (L) slope farmland soils was between 0.04–0.11 mm/min, which was higher than those of moderately eroded (M) (0.02–0.05 mm/min) and heavily eroded (H) slope farmland soils (0.01–0.04 mm/min), with a decrease range of 63.6–75%. The saturated hydraulic conductivity of soil decreased with the increase of depth, being 0.04–0.11 mm/min in the 0-10 cm soil and 0.01–0.05 mm/min in the 10-20 cm soil, with a decrease range of 54.5–75%. The saturated hydraulic conductivity of lightly, moderately and heavily eroded slope farmland soils decreased by 63.6%, 60% and 75%, respectively, with the increase of depth. With the aggravation of soil erosion and degradation, soil permeability and hydraulic conductivity decreased.

Water holding capacity and water supply capacity of black soils

The centrifuge method was used to measure the water content of black soils in slope farmlands with different degrees of erosion degradation after natural water absorption saturation and soil water balance under different rotating speeds (suction values). Then, the VG equation was used to fit the data. The parameter values are shown in Table 2. The correlation coefficient R² was above 0.7594. The VG equation can well simulate the water characteristic curves of black soils with different degradation degrees, as shown in Fig. 3.
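A minimal sketch of how such a VG fit can be reproduced with generic tools (here scipy's least-squares fitting rather than RETC); the retention data, initial guesses and bounds below are illustrative assumptions, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def vg(h, theta_r, theta_s, a, n):
    """Van Genuchten retention curve with the constraint m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (a * h) ** n) ** m

# Illustrative suction (kPa) / volumetric water content pairs, not measured data.
h = np.array([1.0, 5, 10, 30, 50, 100, 300, 500, 1000])
theta = np.array([0.52, 0.48, 0.45, 0.40, 0.37, 0.33, 0.27, 0.24, 0.20])

p0 = [0.10, 0.52, 0.05, 1.4]                    # theta_r, theta_s, a, n
bounds = ([0.0, 0.3, 1e-4, 1.01], [0.3, 0.7, 1.0, 3.0])
popt, _ = curve_fit(vg, h, theta, p0=p0, bounds=bounds)

residuals = theta - vg(h, *popt)
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((theta - theta.mean()) ** 2)
print("theta_r, theta_s, a, n =", np.round(popt, 4), " R2 =", round(r2, 4))
```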
The difference between the saturated water content θs and the permanent wilting point θr can characterize the water holding capacity of soil: the greater the difference, the stronger the water holding capacity. The differences between saturated water content and permanent wilting point for the 0-10 and 10-20 cm soil layers were 0.4418 and 0.4245, respectively, in the lightly eroded sampling plot (L), 0.4076 and 0.3880, respectively, in the moderately eroded sampling plot (M), and 0.3783 and 0.3662, respectively, in the heavily eroded sampling plot (H). It can be seen that the water holding capacity of lightly eroded farmland soil was the strongest, followed by moderately eroded farmland soil, and that the water holding capacity of the upper soil was stronger than that of the lower layer. Therefore, with the aggravation of erosion degradation, the water holding capacity of black soils decreased. Table 2 and Fig. 3 indicate that the parameter n, characterizing the shape of the water characteristic curve, gradually decreased with the aggravation of black soil erosion and degradation, and that the slope of the water characteristic curve of heavily eroded farmland black soils was the steepest, followed by moderately eroded soil and finally lightly eroded soil. The a value of black soils followed the order lightly eroded farmland < moderately eroded farmland < heavily eroded farmland. It can also be seen that with the aggravation of erosion, the content of soil clay gradually decreased and the content of soil sand increased, which reduces the water holding capacity of soil (Table 3). The results showed that, under the same soil water suction, the specific water capacity of the 0-10 cm soil layer was larger than that of the 10-20 cm soil layer, and the specific water capacity of the same soil layer followed the order L > M > H (Fig. 4). The specific water capacities of the 0-10 and 10-20 cm soil layers in M were 7.52% and 10% lower than those in L, and the specific water capacities of the 0-10 and 10-20 cm soil layers in H were 7.75% and 5.73% lower than those in M, respectively (Fig. 4). Therefore, soil erosion and degradation reduce the water supply capacity of soil.

Correlations between soil physicochemical properties and water characteristic parameters

The relationship between soil physicochemical properties and soil erodibility was not specifically analyzed in this study. However, our results showed that, with the intensification of soil erosion and degradation of slope farmlands, the contents of soil organic matter, >0.25 mm water-stable aggregates, silt and clay decreased, while soil bulk density and sand content increased, as shown in Table 3. These results indicate close relations between soil erodibility and soil physicochemical properties for slope farmlands in the black soil region. Soil hydraulic properties are further affected by soil physicochemical properties. The correlations between the physicochemical properties and water characteristic parameters of surface soil in slope farmlands with different erosion and degradation degrees were analyzed (Table 4).
Parameter a was significantly negatively correlated with soil organic matter and clay content (P < 0.05) and extremely significantly negatively correlated with >0.25 mm water-stable aggregates and silt content (P < 0.01), while it was significantly positively correlated with bulk density (P < 0.05) and extremely significantly positively correlated with sand content (P < 0.01) (Table 4). Parameter n was negatively correlated with soil bulk density and sand content (P < 0.01), and positively correlated with >0.25 mm water-stable aggregates and silt content (P < 0.01), but it was not correlated with organic matter and clay content (Table 4). The correlations between specific water capacity and both soil bulk density and silt content were highly significant: the specific water capacity of soil decreased with the increase of soil bulk density and the decrease of silt content (Table 4). In addition, soil specific water capacity was significantly positively correlated with soil organic matter and negatively correlated with sand content (P < 0.05). There was a significant negative correlation between saturated hydraulic conductivity and soil bulk density (P < 0.05), but no significant correlation was found between saturated hydraulic conductivity and soil organic matter, sand, silt, clay or >0.25 mm water-stable aggregates content.

DISCUSSION

Soil erodibility is closely related to soil physicochemical properties (Jiang, Pan & Yang, 2004; Yang, Yang & Ma, 2014). A large number of studies have shown that soil erodibility is negatively correlated with the contents of organic matter and >0.25 mm water-stable aggregates in soil, and positively correlated with soil bulk density (Fan, Zhu & Shangguan, 2023; Lu, 2022; Lv, 2021; Wang, Cui & Zhao, 2017). Soil physicochemical properties, such as soil organic matter content, mechanical composition, bulk density and pore distribution, greatly change with the intensification of soil erosion and degradation, which can have significant effects on soil saturated hydraulic conductivity and soil erodibility. With the aggravation of the erosion degree, soil sand content increased, clay content and organic matter content decreased, soil aggregate particles were broken, and aggregate stability decreased (Ai, 2013; Gao, Gu & Li, 2018). The saturated hydraulic conductivity of soil increased with the increase of soil organic matter and total porosity, but decreased with the increase of soil bulk density (Mao, Huang & Shao, 2019; Wang et al., 2016; Zhang, Zhao & Hua, 2009). Consistent with our first hypothesis, with the aggravation of black soil erosion and degradation, the saturated hydraulic conductivity of soil decreased, because soil organic matter and >0.25 mm water-stable aggregate contents gradually decreased with soil erosion and degradation, which increased soil bulk density and led to the decrease of soil water permeability. Our results were consistent with the findings by Zhang et al. (2015) and Jing, Liu & Ren (2008).
(2015) and Jing, Liu & Ren (2008). In addition, previous studies have shown that the destruction of soil structure leads to a decrease in soil infiltration rate, with a significant positive correlation between structural stability and infiltration (Yang, Zhao & Lei, 2006; Yang, Wu & Zhao, 2009). We found that the saturated hydraulic conductivity of lightly eroded topsoil was significantly higher than that of moderately eroded topsoil, while there was no significant difference between the saturated hydraulic conductivity of moderately and heavily eroded topsoil. This may be because soil structure is already severely damaged in the transition from light to moderate erosion, reducing soil infiltration performance to a very low level; from moderate to heavy erosion, the additional structural damage is smaller, so infiltration performance does not decrease significantly further.

Previous results have shown that, compared with other models, the VG model has the highest accuracy for simulating the soil water characteristic curve (Deng et al., 2016; Wang et al., 2018; Zhang et al., 2022). In this study, the VG model was used to simulate the water characteristic curve of black soils, and the fitting correlation coefficients (R²) were all between 0.7594 and 0.9939. The model can therefore effectively fit the relationship between water content and water suction of black soils in slope farmlands with different erosion and degradation degrees. Compared with the VG models of lightly and moderately eroded soil, the R² value of the heavily eroded soil VG model was much lower. The reason may be that the sand content of heavily eroded soil is significantly higher than that of lightly and moderately eroded soil, so its water-holding capacity is lower: soil moisture content dropped sharply as water suction first increased, leaving little change in moisture content with suction in the middle and late stages of centrifugation. Hence the VG model R² value of heavily eroded soil, with its higher sand content, is smaller.

With the aggravation of soil erosion and degradation, the difference between saturated water content θs and permanent wilting point θr decreased, as did the shape parameter n, indicating that soil water-holding capacity was weakened. That is probably because, with the aggravation of soil erosion, the contents of soil organic matter and clay decrease and the content of sand increases, which eventually leads to a decrease in soil water-holding capacity (Zhai et al., 2016). Ma, Fu & Luo (2017) indicated that the difference between saturated water content θs and permanent wilting point θr can characterize the water-holding capacity of soil, with a greater difference reflecting stronger water-holding capacity. Dong et al. (2017) found that the larger the fitting parameter n of the VG model, the better the soil water retention capacity. Therefore, the water-holding capacity of typical black soils decreases with the aggravation of black soil erosion and degradation.
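As an illustration of this fitting step, the sketch below fits the van Genuchten form θ(h) = θr + (θs - θr) / [1 + (αh)^n]^(1 - 1/n) with scipy.optimize.curve_fit and reports R² and the water-holding index θs - θr. It is a minimal sketch, not the authors' code, and the (h, θ) pairs are hypothetical stand-ins for one centrifuge-measured sample.

import numpy as np
from scipy.optimize import curve_fit

def vg(h, theta_r, theta_s, alpha, n):
    # van Genuchten retention curve with the common restriction m = 1 - 1/n
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.array([1, 5, 10, 30, 60, 100, 300, 600, 1000])                      # suction, kPa (assumed)
theta = np.array([0.52, 0.48, 0.45, 0.40, 0.37, 0.35, 0.31, 0.29, 0.28])   # cm^3 cm^-3 (assumed)

popt, _ = curve_fit(vg, h, theta, p0=[0.05, 0.52, 0.02, 1.3],
                    bounds=([0, 0, 1e-4, 1.01], [0.3, 0.7, 1.0, 3.0]))
theta_r, theta_s, alpha, n = popt
r2 = 1 - np.sum((theta - vg(h, *popt)) ** 2) / np.sum((theta - theta.mean()) ** 2)
print(f"theta_r={theta_r:.3f} theta_s={theta_s:.3f} alpha={alpha:.4f} n={n:.3f} R^2={r2:.4f}")
print(f"water-holding index theta_s - theta_r = {theta_s - theta_r:.4f}")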
It has been found that the parameters α and n of the VG model water characteristic curve can reflect the water-holding capacity of soil: the smaller the α value and the larger the n value, the better the water-holding capacity (Ma, Fu & Luo, 2017; Pan, Lei & Zhang, 2007; Wang et al., 2018). Soil water-holding capacity is mainly affected by basic physicochemical properties such as soil bulk density, organic matter content, soil texture and soil porosity; it is positively correlated with soil texture and porosity, and negatively correlated with soil bulk density (Liu et al., 2017a; Zhao, Zhou & Wu, 2002). The results of this study showed that parameter α was negatively correlated with the contents of organic matter, silt, clay and >0.25 mm water-stable aggregates in soil (P < 0.05), but positively correlated with soil bulk density and sand content (P < 0.05). Parameter n was negatively correlated with soil bulk density and sand content (P < 0.01) and positively correlated with silt and >0.25 mm water-stable aggregates content (P < 0.01), but did not correlate with soil organic matter and clay contents. Our results provide evidence that soil erosion and degradation led to decreases in the contents of soil organic matter and >0.25 mm water-stable aggregates and to an increase in soil bulk density, which consequently decreased soil water-holding capacity. Therefore, soil bulk density and the contents of organic matter and >0.25 mm water-stable aggregates were the main factors affecting soil water-holding capacity. In Table 3, silt content in the heavily eroded (H) class was lower than in L and M at both sampling depths. The reason may be that soil erosion leads to the fragmentation of soil aggregate particles, and the small particles generated after the fragmentation of micro-aggregates are carried away by rain and wind, resulting in an imbalance of soil aggregates. The decrease of aggregate stability in turn intensifies surface runoff and soil erosion, decreasing fine-particle content and coarsening texture (Ai, 2013). Earlier studies have also shown that the clay and silt contents of black soils decreased with increasing erosion degree (Zhai et al., 2016; Gao, Gu & Li, 2018), which supports our results.

The specific water capacity at a soil water suction of 100 kPa (C(100)) is a good measure of the water supply capacity of soil (Liu et al., 2019). Research has indicated that specific water capacity is a useful index of the amount of water a soil can release for plant absorption (Liu et al., 2017a): the greater the specific water capacity, the stronger the soil water supply capacity and drought resistance. In this study, the specific water capacity of soil decreased with soil erosion and degradation, indicating that erosion and degradation reduce the water supply capacity of soil, mainly because soil with a low degree of erosion and degradation has higher organic matter content, better soil structure and higher water absorption capacity, making its water supply capacity stronger (Ma et al., 2005).
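Because C(h) is simply the negative slope of the fitted retention curve, it can be evaluated in closed form from the VG parameters. The sketch below (again illustrative only; the parameter values are assumed, e.g. carried over from a fit like the one above) evaluates C at h = 100 kPa.

theta_r, theta_s, alpha, n = 0.05, 0.49, 0.015, 1.35   # assumed fitted VG parameters

def specific_water_capacity(h, theta_r, theta_s, alpha, n):
    # C(h) = -d(theta)/dh for theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*h)^n)^m
    m = 1.0 - 1.0 / n
    return ((theta_s - theta_r) * m * n * alpha * (alpha * h) ** (n - 1)
            * (1.0 + (alpha * h) ** n) ** (-m - 1.0))

print(specific_water_capacity(100.0, theta_r, theta_s, alpha, n))  # C(100), cm^3 cm^-3 kPa^-1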
Finally, it should be noted that the accuracy of the VG model varies with soil texture and physicochemical properties. For example, when the sand content in the soil is high (more than 50%), the VG model fits poorly (Zhan, Li & Yu, 2022). Therefore, although our results demonstrate that the VG model can effectively simulate soil water characteristic curves of black soils in slope farmlands with different erosion and degradation degrees, caution is needed in applying and extending the conclusions drawn from this model.

CONCLUSIONS

The water characteristics of black soils in sloping farmlands with different degrees of erosion degradation have seldom been reported in the past. Our study investigated the saturated hydraulic conductivity, water-holding capacity and water supply capacity of black soils in lightly, moderately and heavily eroded slope farmlands, fitted them with the VG model, and explored their correlations with soil physicochemical properties. The results support our hypotheses that the aggravation of erosion and degradation of black soil in slope farmlands coarsens soil texture, reduces the contents of organic matter and >0.25 mm water-stable aggregates, and increases soil bulk density, which decreases soil saturated hydraulic conductivity and weakens soil water-holding and water supply capacity. These findings provide a scientific basis and baseline data for the rational utilization of soil water, improvement of land productivity and prevention of soil erosion. Improving the soil water characteristics of sloping farmland in the black soil region can not only increase soil infiltration, reduce surface runoff and erosion, and enhance water storage and moisture conservation capacity, but also provide a theoretical basis for the efficient use of agricultural water resources, which is of great significance for sustainable agricultural development in the black soil region.

Figure 2: Saturated hydraulic conductivity of soils in slope farmlands with different erosion and degradation degrees. Note: L, lightly eroded soils; M, moderately eroded soils; H, heavily eroded soils. Values are means of three replicates ± SD and different lowercase letters indicate significant differences (Duncan's multiple range test, p < 0.05). DOI: 10.7717/peerj.15930/fig-2

Figure 4: Specific water capacity of soil in slope farmlands with different erosion and degradation degrees. Note: L, lightly eroded soils; M, moderately eroded soils; H, heavily eroded soils. Values are means of three replicates ± SD and different lowercase letters indicate significant differences (Duncan's multiple range test, p < 0.05). DOI: 10.7717/peerj.15930/fig-4

Table 1: Basic information of sampling sites.

Table 2: Fitting parameters of the VG model of the water characteristic curve. Note: L, lightly eroded soils; M, moderately eroded soils; H, heavily eroded soils. Values are means of three replicates ± SD and different lowercase letters indicate significant differences (Duncan's multiple range test, p < 0.05).

Table 4: Pearson correlation coefficients between soil physicochemical properties and water characteristic parameters.
2023-10-07T15:04:29.065Z
2023-10-05T00:00:00.000
{ "year": 2023, "sha1": "f5fce98130459806a92b0f3008801c6f44a0f82c", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0e791ae8fa796c5418d89ad2ed6c61444c81ecce", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
15117562
pes2o/s2orc
v3-fos-license
Quantum Iterative Deepening with an Application to the Halting Problem

Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling the end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems such as the Entscheidungsproblem, which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation based on production system theory in conjunction with Grover's amplitude amplification scheme that allows for (1) detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation; and (3) an inherent speedup during computations susceptible of parallelization. We discuss how such a strategy can be employed in order to simulate classical Turing machines.

Introduction

Classically, the status of any computation can be determined through a halt state. The concept of the halting state has some important subtleties in the context of quantum computation. The first of these relates to quantum state evolution, which needs to be expressed through unitary operators that represent reversible mappings. As a consequence, two successive states cannot be equal. Ekert draws attention to this fact, stating that there are two possibilities to circumvent the issue [1]: either run the computation for some predetermined number of steps, or alternatively employ a halt flag. This flag is then employed by a computational model to signal the end of the calculation. Traditionally, such a flag is represented by a halt bit which is initialized to 0 and set to 1 once the computation terminates. Accordingly, determining if a computation has finished is simply a matter of checking if the halt bit is set to 1, a task that can be accomplished through some form of periodic observation. Furthermore, undecidable problems, such as the famous Entscheidungsproblem challenge proposed by Hilbert in [2], require that computational models be capable of proceeding indefinitely, a procedure that can only be verified through recurrent observation of a halt bit. Classical models of computation are able to execute undecidable problems since their formulation allows for the use of such a flag without affecting the overall result of the calculation. Undecidable problems are important because they demonstrate the existence of a class of problems that does not admit an algorithmic solution no matter how much time or spatial resources are provided [3]. This result was first demonstrated by Church [4] and shortly after by Turing [5].

Problem

Deutsch [6] was the first to suggest and employ such a strategy in order to describe a quantum equivalent of the Turing machine, which employs a compound system |r⟩ expressed as a tensor product of two terms, i.e.
|r⟩ = |w⟩|h⟩, spanning a Hilbert space H_r = H_w ⊗ H_h. The component |w⟩ represents a work register of unspecified length and |h⟩ a halt qubit which is used in an analogous fashion to its classical counterpart. However, Deutsch's strategy turned out to be flawed. Namely, suppose a unitary computational procedure C acting on input |x⟩ is applied d times, and let d_{C,x} represent the number of steps required for procedure C to terminate on input x. Then there may exist i ≠ j for which d_{C,i} < d < d_{C,j}. Now, let us consider what happens when such behaviour is present and |w⟩ is initialized as a superposition of the computational basis. Those states which require a number of computational steps less than or equal to d in order to terminate will have the halt qubit set to |1⟩, whilst the remaining states will have the same qubit set to |0⟩. This behaviour effectively results in the overall superposition state |w⟩|h⟩ becoming entangled, as exemplified by Expression 1, where we have assumed that w employs n bits.

More generally, suppose that the compound system after the unitary evolution C^d is in the entangled state |ψ⟩ represented by the right-hand side of Expression 2. Also, assume that the probability of observing the halting qubit |h⟩ with outcome k ∈ {0, 1} is P(k) = ⟨ψ|(I ⊗ |k⟩⟨k|)|ψ⟩. The projection postulate implies that we obtain a post-observation state of the whole system as the one illustrated in Expression 3, where the system is projected onto the subspace of the halting register and renormalized to unit length [7]. Consequently, observing the halt qubit after d computational steps have been applied will result in the working register containing either: (1) a superposition of the non-terminating states; or (2) a superposition of the halting states. Such behaviour has the potential to dramatically disturb a computation since: (1) a halting state may not always be obtained upon measurement due to random collapse, if indeed one exists; and (2) any computation performed subsequently using the contents of the working register |w⟩ may employ an adulterated superposition, with direct consequences on the interference pattern employed. Roughly speaking, there is no way to know whether the computation has terminated without measuring the state of the machine but, on the other hand, such a measurement may dramatically disturb the current computation.
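A small numerical illustration of this projection (our own sketch, not from the paper): a hypothetical two-qubit work register is entangled with a halt qubit, the halt qubit is measured, and the work register is projected and renormalized, losing part of the original superposition.

import numpy as np

rng = np.random.default_rng(0)
n = 2                                    # work-register qubits (hypothetical)
amps = np.full(2 ** n, 0.5)              # uniform superposition over the |w> basis states
halted = np.array([1, 0, 1, 0])          # which basis states have h = |1> after d steps (assumed)

p1 = np.sum(np.abs(amps[halted == 1]) ** 2)        # P(h = 1)
outcome = 1 if rng.random() < p1 else 0
post = np.where(halted == outcome, amps, 0.0)      # projection onto the observed subspace
post /= np.linalg.norm(post)                       # renormalize to unit length
print(f"P(h=1) = {p1}, observed h = {outcome}, post-measurement |w>: {post}")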
Current approaches to the quantum halting problem

Ideally, one could argue that any von Neumann measurement should only be performed after all parallel computations have terminated. Indeed, some problems may allow one to determine max d_{C,x} over all |x⟩ in |ψ⟩, i.e. an upper bound on the number of steps required for every possible input x present in the superposition. However, this procedure is not viable for problems which, like the Entscheidungsproblem, are undecidable. Bernstein and Vazirani subsequently proposed a model for a universal quantum Turing machine in [8] which did not incorporate into its definition the concept of non-termination. Although their model remains an important theoretical contribution, it is nonetheless only capable of dealing with computational processes whose different branches halt simultaneously or fail to halt at all. These same arguments were later employed by Myers in [9], who argues that it is not possible to precisely determine, for all functions that are Turing-computable (respectively, µ-recursive functions), the number of computational steps required for completion. Additionally, the author states that the models presented in [6] and [8] cannot be qualified as truly universal since they do not allow for non-terminating computation. The work described in [8] is also restricted to the class of quantum Turing machines whose computational paths are synchronized, i.e. every computational path must reach a halt state at the same time step. This enabled the authors to sidestep the halting problem.

Following Myers' observation of the conflict between quantum computation and system observation, a number of authors provided meaningful contributions to the question of halting in quantum Turing machines. Ozawa [10, 11] proposed a possible solution based on quantum non-demolition measurements, a concept previously employed for gravitational wave detection. Linden [12] argued that the standard halting scheme for Turing machines employed by Ozawa is unitary only for non-halting computations. Additionally, the author described how to build a quantum computer through the introduction of an auxiliary ancilla bit that enabled system monitoring without spoiling the computation. However, such a scheme introduced difficulties regarding different halting times for different branches of computation; these restrictions essentially rendered the system classical since no useful interference occurred. The work in [13] expands the halting scheme described in [10] in order to introduce the notion of a well-behaved halting flag which is not modified upon completion; the author showed that the output probability distribution of monitored and non-monitored flags is the same. Miyadera proved that no algorithm exists capable of determining whether an arbitrarily constructed quantum Turing machine halts at different computational branches [14]. Iriyama discusses halting through a generalized quantum Turing machine that is able to evolve through states in a non-unitary fashion [15].

Measurement-based quantum Turing machines as a model for computation were defined in [16] and [17]. Perdrix explores the halting issue by introducing classically-controlled quantum Turing machines [18], in which unitary transformations and quantum measurements are allowed, but restricts his model to quantum Turing machines that halt. Muller shows the existence of a universal quantum Turing machine that can simulate every other quantum Turing machine until the simulated model halts, which then results in the universal machine halting with probability one [19, 20].
The author describes operators that do not disturb the computation as long as the original input halts the calculation process. This requires a precise definition of the concept of a halting state, a notion that results in a restriction where large parts of the domain are discarded since the definition's requirements are not met.

In [21] a method is presented for verifying the correctness of measurement-based quantum computation in the context of the one-way quantum computer described in [22]. This type of quantum computation differs from the traditional circuit-based approach since one-qubit measurements are performed on an entangled resource, labeled a cluster state, in order to mold a quantum logic circuit onto the state; with each measurement the entanglement resource is further depleted. These results are extended in [23] in order to prove the universality of the computational model. Subsequently, in [24] these concepts were used to prove that one-way quantum computations have the same computational power as quantum circuits with unbounded fan-out. Perdrix [25] discusses partial observation of quantum Turing machines which preserves the computational state through the introduction of a weaker form of the original requirements of linear and unitary δ functions suggested by Deutsch in [6]. Recently, [26] proved that measurement performed in the (X, Z)-plane of the Bloch sphere over graph states is a universal measurement-based model of quantum computation.

Objectives

In his seminal paper [6], Deutsch emphasizes that a quantum computer needs the ability to operate on an input that is a superposition of the computational basis in order to be "fully quantum". When confronted with the halting issue, Myers naturally raised the question of whether a universal quantum computer could ever be fully quantum, and how such a computational model would eventually function. We aim to provide an answer to these questions by developing an alternative proposal to quantum Turing machines based on production system theory. We introduce such a computational model in order to gain additional insight into the matter of halting and universal computation from a different perspective than that of the standard quantum Turing machine.

As Miyadera stated, the notion of probabilistic halting in the context of quantum Turing machines cannot be avoided, suggesting that the standard halting scheme of traditional quantum computational models needs to be re-examined [14]. Our proposal is essentially different from the ones previously discussed since it imposes a strict notion of how the computation is performed and progresses, in the form of the sequence of instructions that should be applied. Our method evaluates d-length sequences of instructions representing different branches of computation, enabling one to determine which branches, if any, terminate the computation. Underlying the proposed model will be Grover's algorithm, used to amplify the amplitude of potential halting states, if such states exist, thus avoiding a random projection upon measurement. As a result, we will focus on characterizing the computational complexity associated with such a model and on showing that it does not differ from that of Grover's algorithm.
With this work we are particularly interested in: (1) preserving the original principles of linearity and unitary operators proposed by Deutsch, in contrast with other proposals such as [25] and [15] which modify the underlying framework; (2) developing a model which considers all possible computational paths; and (3) ensuring the model works independently of whether the computation terminates or not, taking each possible computational path into account. Additionally, we will also consider some of the implications of being able to circumvent the halting problem. Computational universality is a characteristic attribute of several classical models of computation. For instance, the Turing machine model was shown to be equivalent in power to the lambda calculus and to production system theory. Accordingly, it would be interesting to determine what aspects of such a relationship are maintained in the context of quantum computation. Namely, we are interested in determining if it is possible to simulate a classical Turing machine given a quantum production system.

Organisation

The ensuing sections are organised as follows: Section 2 presents the details of production system theory, a computational model that will be employed to model tree search applied to the halting problem; Section 3 extends these ideas to a quantum context and discusses the details associated with our proposal for the detection of quantum halting states; Section 4 demonstrates how our proposal can be employed in order to coherently simulate a classical Turing machine. We present the conclusions of this work in Section 5.

Production System Review

Our approach to the detection of quantum halting states requires fixing a computational model. This step is required since our proposal depends on the set of state transitions occurring during a computational process. We choose not to focus on Turing machines; instead, our proposal will be formulated in terms of production system theory. This decision is based on the fact that the quantum Turing machine model was already well explored by Deutsch [6] as well as Bernstein and Vazirani [8]. Furthermore, the combination of quantum concepts such as interference, entanglement and the superposition principle alongside the halting issue also contributes to making these models inherently complex; as a result, it is difficult to express elementary computational procedures. This behaviour contrasts with the simplicity of production system theory, which allows for an elegant and compact representation of computations.

Production system theory is also well suited to support tree search, a form of graph search from which we drew our initial inspiration. In addition, the classical counterparts of both models were shown to be equivalent in computational power [27]. The production system is a formalism for describing the theory of computation proposed by Post in [28], consisting of a set of production rules R, a control system C and a working memory W. This section reviews some of the most significant definitions proposed in [29], namely:

Definition 1. Let Γ be a finite nonempty set whose elements are referred to as symbols. Additionally, let Γ* be the set of strings over Γ.

Definition 2. The working memory W is capable of holding a string belonging to Γ*. The working memory is initialized with a given string, which is also commonly referred to as the initial state γ_i.

Definition 3. The set of production rules R has the form presented in Expression 4.
Each rule's precondition is matched against the contents of the working memory. If the precondition is met, then the action part of the rule can be applied, changing the contents of the working memory.

Definition 4. The tuple (Γ, S_i, S_g, R, C) represents the formal definition of a production system, where Γ, R are finite nonempty sets and S_i, S_g ⊂ Γ* are, respectively, the finite sets of initial and goal states. The control function C satisfies Expression 5.

The control system C chooses which of the rules to apply and terminates the computation when a goal configuration γ_g of the memory is reached. If C(γ) = (r, γ′, {h, c}), the interpretation is that, if the working memory contains string γ, then it is substituted by the action γ′ of rule r and the computation either continues, c, or halts, h. Traditionally, the computation halts when a goal state γ_g ∈ S_g is achieved through a production, and continues otherwise. With these definitions in mind it becomes possible to develop a suitable model for a quantum production system. Namely, the complex-valued control strategy would need to behave as illustrated in Expression 6, where C(γ, r, γ′, s) provides the amplitude with which, if the working memory contains string γ, rule r will be chosen, substituting string γ with γ′, with a decision s made on whether to continue or halt the computation. The amplitude values provided would also have to be in accordance with Expression 7.

We will employ the notation described in [7] to describe the evolution of our quantum production system. Suppose we have a unitary operator C of the form presented in Expression 6. Operator C is responsible for a discrete state evolution taking the system from state γ to γ′ through production r, expressed as γ ⊢_r γ′. We refer to the transition γ ⊢_r γ′ as a computational step. The computation of a production system starting in an initial state i ∈ S_i can be defined as a sequence of such steps for every k, where d ∈ N represents the depth at which a solution state g ∈ S_g can be found. In general, the unitary operator C can be perceived as applying a single computational step of the control strategy for a general production system. This notation is convenient since we are able to express the computation of a production system C up to depth-level d as C^d, i.e. a depth-limited search mechanism that mimics the behaviour illustrated in Figure 1.

Quantum Iterative Deepening

Universal models of computation are capable of calculating µ-recursive functions, a class of functions which allows for the possibility of non-termination. These functions employ a form of unbounded minimalization, respectively the µ-operator, which is defined in the following terms [3]: let k ≥ 0, c ∈ N, m ∈ N and g : N^{k+1} → N; then the unbounded minimization of g is the function f : N^k → N that maps each n ∈ N^k to the least m such that g(n, m) = c. The unbounded minimization operator can be perceived as a computational procedure responsible for repeatedly evaluating a function with different inputs m until a target condition g(n, m) = c is obtained [30]. However, as illustrated by Expression 8, there is no guarantee that the target condition will ever be met. Accordingly, it is possible to express the inner workings of f as an iterative search that may never terminate, as illustrated in Algorithm 1. Notice that although µ-recursive functions employ a collection of variables belonging to the set of natural numbers, for practical purposes these values are restricted by architecture-specific limits on the number of bits available for representing the range of possible values.
Algorithm 1: The classical µ-operator (adapted from [30])
m ← 0
while g(n, m) ≠ c do
    m ← m + 1
return m

From a quantum computation perspective, it is possible to perform a generic search for solution states through amplitude amplification schemes such as the one described by Grover in [31] and [32]. In this section we discuss how to combine production system theory with the quantum search algorithm in order to develop a new computational model better suited to deal with the halting issue.

The next sections are organized in the following manner: Section 3.1 presents the main details associated with Grover's algorithm; Section 3.2 proposes an oracle formulation of the quantum production system; Section 3.3 focuses on how to integrate these components into a single unified approach for a computational model based on production system theory capable of proceeding indefinitely without affecting the overall result of the computation; Section 4 presents a simple mapping mechanism showing how our approach can be used to simulate a classical Turing machine.

Grover's algorithm

The quantum search algorithm employs an oracle O whose behaviour can be formulated as presented in Expression 9, where |w⟩ is an n-qubit query register and |h⟩ is a single-qubit answer register. Additionally, f(w) is responsible for checking if w is a solution to a problem, outputting value 1 if so and 0 otherwise. In the context of this research we only consider deterministic functions. It is important to mention that we employed some care when defining the oracle in terms of registers |w⟩ and |h⟩, in a similar manner to the quantum Turing machine model proposed by Deutsch. We deliberately chose to do so in order to establish some of the connections between the halting problem and the quantum search procedure. We may view the halting problem as one where we wish to obtain the computational basis states present in |w⟩ which lead to goal states g ∈ S_g, where S_g is defined as the set of halting states.

Grover's algorithm starts by setting up a superposition of 2^n elements in register |w⟩ and subsequently employs a unitary operator G, known as Grover's iterate [33], in order to amplify the amplitudes of the goal states and diminish those of non-goal states. The algorithm is capable of searching the superposition of 2^n elements by invoking the oracle O(√(2^n)) times. The computational complexity of f should also be taken into consideration. Namely, assume that f takes time t_f. Since Grover's algorithm performs √(2^n) oracle invocations, the total complexity will be O(√(2^n) t_f). This complexity still represents a speedup over an equivalent classical procedure, since 2^n states would have to be evaluated independently. However, for a polynomial t_f the overall complexity will be dominated by the dimension of the search space, i.e. O(√(2^n)). For this reason, it is often assumed that f is computable in polynomial time. This assumption also makes such oracle models suitable for the complexity class NP, which represents the class of languages that can be verified by a polynomial-time algorithm.

In addition, it is also possible that the space includes several solutions. Accordingly, let k represent the number of solutions that exist in the search space; then the complexity of the quantum search algorithm can be restated as O(√(2^n / k)).
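The following is a minimal statevector sketch of Grover amplitude amplification (the standard algorithm, not anything specific to this paper): one marked item among N = 2^n, amplified over roughly (π/4)√N iterations.

import numpy as np

n, marked = 6, 13                        # 6 qubits, one hypothetical solution index
N = 2 ** n
psi = np.full(N, 1.0 / np.sqrt(N))       # uniform superposition

iters = int(np.round(np.pi / 4 * np.sqrt(N)))
for _ in range(iters):
    psi[marked] *= -1.0                  # oracle: phase-flip the solution
    psi = 2.0 * psi.mean() - psi         # diffusion: inversion about the mean

print(iters, abs(psi[marked]) ** 2)      # success probability close to 1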
Typically, k can be determined through the quantum counting algorithm described in [34], which requires a similar time complexity. This means that before applying Grover's algorithm one must first determine the number of solutions; overall, the time complexity of applying both methods sequentially remains the same. Once the algorithm terminates and a measurement is performed, a random collapse occurs, with high probability, amongst the amplified solutions. In the remainder of this work we gain generality by thinking in terms of the worst-case scenario where a single solution exists. However, the method described above could still be applied to the proposition described in the following sections. Grover's algorithm was experimentally demonstrated in [35].

Quantum Production System Oracle

Is it possible to present an adequate mapping of our quantum production system that is suitable to be applied alongside Grover's algorithm? A comparison of Expression 6 and Expression 9 allows us to conclude that oracle O performs a verification whilst C focuses on executing an adequate state evolution. Therefore, we need to develop an alternate mechanism that behaves as if performing a verification. We can do so by focusing on one of the main objectives of production system theory, namely that of determining the sequence of production rules leading up to a goal state. Formally, we are interested in establishing if an initial state i ∈ S_i alongside a sequence of d production rules r_1, r_2, …, r_d ∈ R leads to a goal state g ∈ S_g. If the sequence of rules leads to a goal state, then the computation is marked as being in a halt state h; otherwise it is flagged to continue, c. We can therefore proceed with a redefinition of the control function presented in Expression 6, as illustrated in Expression 10, which closely follows the oracle definition presented in Expression 9.

Recall that the oracle operator is applied to register |r⟩ = |w⟩|h⟩. We choose to represent register |w⟩ as a tensor product of two registers, namely |w⟩ = |s⟩|p⟩, where |s⟩ is responsible for holding the binary representation of the initial state and |p⟩ contains the sequence of productions. Register |h⟩ is utilized in order to store the status of the computation. Additionally, the revised version of the quantum production system C with oracle properties should also maintain unit norm, as depicted by Expression 11, ∀γ ∈ Γ*. For specific details surrounding the construction of such a unitary operator please refer to [36].

Any computational procedure can be described in production system theory by specifying an appropriate set of production rules responsible for performing an adequate state evolution. This set of production rules can be applied in conjunction with a unitary operator C incorporating the behaviour mentioned in Expression 10 and Expression 11. In doing so we are able to obtain a derivation of a production system that can be combined with Grover's algorithm. From a practical perspective, we are able to initialize |p⟩ as a superposition over the set P_{R,d} := {sequences of all possible production rules in R up to depth-level d} (Expression 12), as illustrated by Expression 12 and Expression 13. Implicit to these definitions is the assumption that set P_{R,d} has a total of b^d possible paths.
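To make the size of P_{R,d} concrete, the following small Python sketch (an illustration, not from the paper) enumerates every production sequence of depth d for the two rules of Figure 1 and confirms the b^d count.

from itertools import product

R = ["p0", "p1"]                        # b = |R| = 2 production rules, as in Figure 1
d = 3
paths = list(product(R, repeat=d))      # all length-d production sequences
print(len(paths) == len(R) ** d)        # True: b^d = 8 paths
print(paths[0], paths[-1])              # ('p0', 'p0', 'p0') ... ('p1', 'p1', 'p1')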
Traditionally, throughout a computation the set S_i remains static in the sense that it does not grow in size. However, the variable d is constantly increased in order to generate search spaces covering a larger number of states. As a result, given a sufficiently large depth value, the number of bits required for P_{R,d} will eventually surpass the number of bits required to encode set S_i. Accordingly, in the reasonable scenario where the number of bits required to encode the sequences of productions in P_{R,d} is much larger than the number of bits required to encode the set of initial states S_i, the most important factor in the dimension of the search space will be the number of productions. For this reason, Grover's algorithm needs to evaluate a search space spanning roughly a total of b^d paths. As a consequence, the algorithm's running time is O(√(b^d)), which effectively cuts the search depth in half [37].

General procedure

Any approach to a universal model of quantum computation needs to focus on two main issues, namely: (1) how to circumvent the halting problem and (2) how to handle computations that do not terminate without disturbing the result of the procedure. In the next sections we describe our general procedure. We choose to focus first on the second requirement in Section 3.3.1, given that it provides a basis for model development by establishing the parallels between µ-theory and production system theory. We then describe in Section 3.3.2 how these arguments can be utilized in order to develop a computational model capable of calculating µ-recursive functions. We conclude with Section 3.3.3, where we describe how our proposal is essentially no different, complexity-wise, from the original Grover algorithm, thus allowing for an efficient method satisfying both requirements.

Parallels between µ-theory and production system theory

Universal computation must allow for the possibility of non-termination, a characteristic that is achievable through the ability to calculate µ-recursive functions. Therefore, the question naturally arises: is it possible to develop a quantum analogue of the iterative µ-operator? By themselves, µ-recursive functions are not seen as a model of computation, but represent a class of functions that can be calculated by computational models. Accordingly, we are interested in determining whether we are able to develop a quantum computational model, namely by employing the principles of production system theory, capable of calculating µ-recursive functions without affecting the end result.

In order to answer this question we will first establish some parallels between these concepts. Namely, consider the µ-operator presented in Algorithm 1, which receives as an argument a tuple (g, n, c), and a production system defined by the tuple (Γ, S_i, S_g, R, C). Parameter g can be perceived as a control strategy C responsible for mapping a set of symbols Γ in accordance with a set of rules R. Variable n can be interpreted as an element of the set of initial states, i.e. i ∈ S_i.
The target condition c can be understood as the set of goal states S_g. In addition, the unbounded minimization operator employs a parameter m that represents the first argument where the target condition is met. Analogously, from a production system perspective, variable m can be viewed as the first depth d where a solution to the problem can be found. Finally, the condition g(n, m) = c of the while loop is equivalent to applying the control strategy C a total of d times, i.e. C^d, and evaluating whether a goal state was reached.

Iterative Search

The fact that we are able to perform such mappings hints at the possibility of developing our own quantum equivalent of the µ-operator based on production system fundamentals. All that is required is a while-loop structure, mimicking the iterative behaviour of the µ-operator, that exhaustively examines every possibility for d alongside C until a goal state is found. Since we need to evaluate whether applying C^d leads to a solution, we can combine the quantum production system oracle presented in Expression 10 with Grover's iterate, applied a total of √(b^d) times, in order to evaluate a superposition of all the available sequences of productions up to depth-level d, i.e. P_{R,d}. After applying Grover's algorithm we can perform a measurement M on the superposition; if the state ξ obtained is a goal state, then the computation can terminate since a solution was found at depth d.

This process is illustrated in Procedure 1 (Quantum Iterative Deepening), which receives as an argument a tuple (Γ, i, S_g, R, C), where i is an initial state, i.e. i ∈ S_i. We choose to represent our procedure as a form of pseudocode that is in accordance with the conventions utilized in [38], namely: (1) indentation indicates block structure, e.g. the set of instructions of the while loop that begins on line 5 consists of lines 6-14; (2) we use the symbol ← to represent an assignment of a variable; and (3) the symbol ⊲ indicates that the remainder of the line is a comment. Line 7 is responsible for applying the oracle to an initial state and all possible sequences of productions; recall that register |h⟩ will be set to mark whether goal states exist at depth d. Line 9 is responsible for applying Grover's algorithm. If goal states are present in the superposition, then Grover's amplitude amplification scheme allows one of them to be obtained with probability sin²((2m + 1)θ/2) after m applications of the iterate, where k represents the number of solutions and θ = 2 arccos(√((b^d − k)/b^d)). It is possible that the resulting state |ψ_2⟩ contains a superposition of solutions; therefore, measuring the system in Line 10 will result in a random collapse amongst these. If the measurement returns a halt state, then register |p⟩ will contain a sequence of productions leading to a goal state; once the associated sequence has been obtained, it only remains to apply each of its productions in order to determine precisely which goal state was reached [36] (Line 11). Otherwise, the search needs to be expanded to depth level d + 1 and the production evaluation process repeated from the start; as a result, the procedure requires building a new superposition of productions P_{R,d+1} each time a solution was not found in P_{R,d}.

Due to the probabilistic nature of Grover's algorithm there is also the possibility that the measurement will return a non-halting state, even though |ψ_2⟩ might have contained sequences of productions that led to goal states. This issue can be circumvented to a certain degree. Notice that the sequences expressed by P_{R,d+1} also contain the paths of P_{R,d} as subsequences. This means that when P_{R,d+1} is evaluated the iteration procedure has the opportunity to re-examine P_{R,d}. As a result, operator C would have the chance to come across the exact subsequences that had previously led to goal states but that were not obtained after the measurement. Therefore, the control strategy would need to be modified in order to signal a halt state as soon as a solution is found, i.e. at the shallowest production, independently of the sequence length being analyzed.
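To make the loop concrete, the following is a minimal classical statevector sketch of the procedure just described (our own illustration reconstructed from the prose, not the paper's listing). The toy rules p0: x → x + 1 and p1: x → 2x, the initial state 1 and the goal state 5 are hypothetical, and quantum counting is stood in for by counting the marked sequences directly.

import numpy as np
from itertools import product

rng = np.random.default_rng(1)
R = {"p0": lambda x: x + 1, "p1": lambda x: 2 * x}   # hypothetical toy rules
i_state, S_g = 1, {5}

def halts(seq):
    # Control strategy: flag a halt as soon as the shallowest prefix reaches a goal state
    x = i_state
    for r in seq:
        x = R[r](x)
        if x in S_g:
            return True
    return False

d = 1
while True:                                           # may loop forever, like the mu-operator
    paths = list(product(R, repeat=d))                # P_{R,d}: all b^d sequences
    psi = np.full(len(paths), 1.0 / np.sqrt(len(paths)))
    marked = np.array([halts(p) for p in paths])
    k = int(marked.sum())                             # via quantum counting in the real model
    if k:
        for _ in range(max(1, int(np.pi / 4 * np.sqrt(len(paths) / k)))):
            psi[marked] *= -1.0                       # oracle: phase-flip halting sequences
            psi = 2.0 * psi.mean() - psi              # Grover diffusion
    prob = np.abs(psi) ** 2
    xi = rng.choice(len(paths), p=prob / prob.sum())  # measurement: random collapse
    if halts(paths[xi]):
        print("halting sequence found at depth", d, ":", paths[xi])
        break
    d += 1                                            # expand the search to depth d + 1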
With such a strategy, the probability of obtaining a non-halting state diminishes with each unsuccessful iteration over depth level d. Each iteration of Procedure 1 starts by building a superposition |p⟩ spanning the respective depth level. This means that the original interference pattern, possibly lost upon measuring the system in the previous iteration, is rebuilt and properly extended by the tensor product performed with the new productions. Because of this process the computation is able to proceed as if undisturbed by the measurement. Such a re-examination comes at a computational cost, which will be shown to be negligible in Section 3.3.3. This behaviour contrasts with the original approach discussed by Deutsch, where: (1) a computation would be applied to a superposition |ψ⟩; (2) a measurement would eventually be made on the halt qubit, collapsing the system to |ψ′⟩; and (3) if a goal state had not been obtained, the computation would proceed with |ψ′⟩.

Complexity Analysis

Procedure 1 represents a form of iterative deepening search, a general strategy employed alongside tree search that makes it possible to determine an appropriate depth limit d, if one exists [40]. The first documented use of iterative deepening in the literature is in Slate and Atkin's Chess 4.5 program [41], a classic artificial intelligence application. Notice that up until this moment we had not specified how to obtain a value for depth d; this was done deliberately, since the essence of µ-recursive functions lies in the fact that such a value may not exist. In general, iterative deepening is the preferred strategy when the depth of the solution is not known [40]. Accordingly, the while loop will execute forever unless the state ξ obtained after the measurement in line 11 is a goal state.

Since we employ Grover's algorithm we do not need to measure the halting register specifically. Instead, it is possible to perform a measurement on the entire Hilbert space of the system in order to verify whether a final state is obtained. This type of control structure guarantees the same type of partial behaviour found in the classical µ-operator. Consequently, Procedure 1 also does not guarantee that a suitable d will ever be found, i.e. the search may not terminate. Line 8 of our procedure uses the register |r⟩ = |w⟩|h⟩ = |s⟩|p⟩|h⟩ described in Section 3.2.

Quantum iterative deepening search may seem inefficient because, each time we apply C^d to a superposition spanning P_{R,d}, we are necessarily evaluating the states belonging to previous depth levels multiple times, ∀d > 0.
However, the bulk of the computational effort comes from the dimension of the search space under consideration, respectively b^d, which grows exponentially fast. As pointed out in [42], if the branching factor of a search tree remains relatively constant, then the majority of the nodes will be in the bottom level; this is a consequence of each additional level of depth adding an exponentially greater number of nodes. As a result, the performance impact of having to search the upper levels multiple times is minimal. This argument can be stated algebraically by analysing the individual time complexities associated with each application of Grover's algorithm at the various depth levels; summing the per-depth costs as in Expression 14, Σ_{j=1}^{d} O(√(b^j)) = O(√(b^d)), gives an overall time complexity that remains essentially unchanged from that of the original quantum search algorithm.

By employing our proposal we are able to develop a quantum computational model with an inherent speedup relative to its classical counterparts. Notice that this speedup is only obtained when searching through a search space with a branching factor of at least 2 (please refer to [36, 37]). In addition, if the set of goal states is defined to be the set of halt states, then we are able to use our procedure to circumvent the halting problem. Our method is able to do so since it can compute a result without the associated disruptions of Deutsch's model. We employ such a term carefully, since it may be argued that the measurements performed during computation will inherently disturb the superposition. This is not a problem if a halt state is found. However, if such a goal state is not discovered, we move on to an extended superposition over P_{R,d+1}, representing an exponentially greater search space in which the states from the previous tree levels are included. Consequently, it becomes possible to recalculate the computation as if it had not been disturbed, and without changing the overall complexity of the procedure.

Turing machine simulation

The approach proposed in this work allows for the possibility of non-termination without inherently interfering with the results of the quantum computation. This hints at the possibility that our approach can be applied to coherently simulate classical universal models of computation such as the Turing machine. Specifically, we are interested in determining what would be needed for our model of an iterative quantum production system to simulate any classical Turing machine.

We begin by presenting a set of mappings between Turing machine concepts and production system concepts, in a manner analogous to the trivial mapping described in [43]. Both models employ some form of memory where the current status of the computation is stored. The Turing machine model utilises a tape capable of holding symbols; each element of the tape can be referred to through a location. Tape elements are initially configured in a blank status, but their contents can be accessed and modified through primitive read and write operations. These operations are performed by a head that is able to address each element of the tape. As a result, the memory equivalent in the production system, respectively the working memory, should convey information regarding the current head position and the symbols on the tape, alongside their respective locations. In addition, the tape employed in Turing's model has infinite dimension; consequently, the working memory must also possess an infinite character.
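As a small illustration of such a working-memory encoding (our own toy convention, not the paper's), the sketch below flattens a configuration (internal state, tape contents and head position) into a single string:

def encode_configuration(state, tape, head):
    # Working-memory string: internal state, then tape symbols with the scanned
    # cell wrapped in brackets, e.g. 'q0:1[1]_' for head at index 1
    cells = "".join(f"[{c}]" if i == head else c for i, c in enumerate(tape))
    return f"{state}:{cells}"

print(encode_configuration("q0", "11_", 1))   # q0:1[1]_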
The Turing machine model utilises a δ function to represent finite-state transitions. The δ function maps an argument tuple containing the current state and the input being read to a tuple representing a state transition, an associated output and some type of head movement. This set of transitions can be represented as a table whose rows correspond to states and whose columns represent input symbols; each table entry contains the associated transition tuple specifying the next internal state, a symbol to be written, and a head movement. Notice that this behaviour fits nicely into the fixed set of rules R employed by production systems. Namely, δ's argument and transition tuples can be seen, respectively, as the precondition and associated action of a certain rule. Accordingly, for each table entry of the original Turing transition function it is possible to derive an adequate production rule, thus enabling R to be obtained. The only remaining issue resides in defining a control strategy C that mimics the behaviour presented in Expression 10. Consequently, C needs to choose which of the rules to apply by accessing the working memory, determining the element currently being scanned by the head, and establishing whether a goal state is reached after applying some specific sequence of d rules from R. Once this is done, we are able to apply our iterative quantum production system to simulate the behaviour of a classical Turing machine. The conversion of the δ function into an adequate database of productions is a simple polynomial-time procedure (please refer to [27] and [44] for additional details). In addition, it is important to mention that this approach will only provide a speedup if the simulated Turing machine allows for multiple computational branches; otherwise, if the computation is not capable of being parallelized, we gain nothing, performance-wise, from employing quantum computation.
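As a toy illustration of this conversion (not from the paper), the Python sketch below turns each entry of a hypothetical δ transition table (here, a unary-increment machine) into one production rule whose precondition matches the (state, scanned symbol) pair and whose action encodes the transition tuple.

delta = {
    ("q0", "1"): ("q0", "1", "R"),   # skip over the unary input
    ("q0", "_"): ("q1", "1", "L"),   # write a 1 at the first blank and transition
}

rules = [
    {"precondition": {"state": s, "scanned": a},
     "action": {"next_state": s2, "write": b, "move": m}}
    for (s, a), (s2, b, m) in delta.items()
]
for rule in rules:                   # one production rule per table entry: |R| = |delta|
    print(rule)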
Conclusions

In this work we presented an approach for an iterative quantum production system with a built-in speedup mechanism, capable of the partial behaviour characteristic of µ-recursive functions. Our proposal makes use of a unitary operator C that can be perceived as mapping a total function, since it maps every possible input into a distinct output. However, operator C is employed in a quantum iterative deepening procedure that examines all path possibilities up to a depth level d until a solution is found, if indeed one exists. Due to the probabilistic nature of Grover's algorithm there is always the possibility that, upon measurement, a non-terminating state is obtained. As a consequence, the procedure would iterate to an additional level of productions and could therefore fail to recognize a halting state. This issue can be overcome through the development of specific control strategies capable of signaling that a halting state has been found at the shallowest production yielding such a conclusion, independently of the sequence length being analyzed.

Our model is able to operate independently of whether the computation terminates or not, a requirement associated with universal models of computation. As a result, it becomes possible for our model to exhibit partial behaviour that does not disturb the overall result of the underlying quantum computational process. This result is possible since: (1) Grover's algorithm effectively allows one to obtain halting states, if they exist, with high probability upon system observation; and (2) the overall complexity of this proposition remains that of the quantum search algorithm. This procedure enables the development of verification-based universal quantum computational models, which are capable of coherently simulating classical models of universal computation such as the Turing machine.

Definition 5. Let ζ_d represent a sequence of productions of length d leading up to a state s. If s ∈ S_g then such a sequence is also referred to as a solution.

Figure 1: Tree structure representing the multiple computational paths of a probabilistic production system. The figure illustrates a production system with two production rules, {p_0, p_1}, that can always be applied, yielding a graph with tree form: a search of depth level 3 with initial state A and leaves {H, I, J, K, L, M, N, O}. Each depth layer d adds b^d nodes to the tree, where b is the branching factor resulting from |R|, each node requiring a unique path leading to it; therefore a total of b^d possible paths exist, e.g. state J is reached by applying the sequence {p_0, p_1, p_0}.
2015-02-06T09:28:02.000Z
2013-03-08T00:00:00.000
{ "year": 2013, "sha1": "24eea43a0d25a9c5dc7ce43c37ade739acd6fc79", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0057309&type=printable", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "f0fb725808907931fcb039d474a1f764a0e91f63", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science", "Medicine" ] }
4845913
pes2o/s2orc
v3-fos-license
Limited immune surveillance in lymphoid tissue by cytolytic CD4+ T cells during health and HIV disease

CD4+ T cell subsets have a wide range of important helper and regulatory functions in the immune system. Several studies have specifically suggested that circulating effector CD4+ T cells may play a direct role in control of HIV replication through cytolytic activity or autocrine β-chemokine production. However, it remains unclear whether effector CD4+ T cells expressing cytolytic molecules and β-chemokines are present within lymph nodes (LNs), a major site of HIV replication. Here, we report that expression of β-chemokines and cytolytic molecules is enriched within a CD4+ T cell population with high levels of the T-box transcription factors T-bet and eomesodermin (Eomes). This effector population is predominantly found in peripheral blood and is limited in LNs regardless of HIV infection or treatment status. As a result, CD4+ T cells generally lack effector functions in LNs, including cytolytic capacity and IFNγ and β-chemokine expression, even in HIV elite controllers and during acute/early HIV infection. While we do find degranulating CD4+ T cells in LNs, these cells do not bear functional or transcriptional effector T cell properties and are inherently poor at forming stable immunological synapses compared to their peripheral blood counterparts. We demonstrate that CD4+ T cell cytolytic function, phenotype, and programming in the peripheral blood are dissociated from those characteristics found in lymphoid tissues. Together, these data challenge our current models based on blood and suggest spatially and temporally dissociated mechanisms of viral control in lymphoid tissues.

Introduction

CD4+ T cells are classically known to orchestrate immunity by providing helper functions to other arms of the immune system. However, CD4+ T cells can also exercise direct cell-to-cell-mediated effector functions to control pathogens and tumors. Effector CD4+ T cells with cytolytic activity are generated during many acute viral infections and represent a front-line defense in the gut intraepithelial compartment [1]. Cytolytic CD4+ T cells directly recognize tumor cells and are involved in host protection against chronic viral infections, such as EBV and CMV [2]. As HIV has evolved numerous ways to escape recognition by CD8+ T cells and neutralizing antibodies, it remains important to understand what role effector CD4+ T cells play in controlling HIV replication and limiting disease progression.
Several studies have demonstrated broad and vigorous responses of peripheral blood HIV-specific CD4+ T cells in untreated individuals with low HIV viremia [3-8]. A growing body of evidence suggests that increased cytolytic and non-cytolytic mechanisms mediated by highly differentiated CD4+ T cells are linked to better HIV control. The emergence of HIV-specific CD4+ T cells with cytolytic properties during early infection has been associated with slower subsequent disease progression [7,9]. Furthermore, HIV and SIV elite controllers demonstrate strong Nef- and Gag-specific CD4+ T cell responses in vivo that can suppress viral replication in vitro in both macrophages and CD4+ T cells, potentially through cytolytic activity [9-13]. Late-differentiated CD4+ T cells can also mediate non-cytolytic functional mechanisms to limit CCR5-tropic HIV infection via autocrine production of the β-chemokines CCL3 (MIP-1α), CCL4 (MIP-1β) and CCL5 (RANTES), either by blocking the interaction between gp120 and CCR5 or by downregulating CCR5 from the cell surface [14]. In this regard, CMV-specific CD4+ T cells are particularly known to be efficient producers of β-chemokines and are notably preserved in late HIV infection [15-17]. Moreover, elite controllers possess CD4+ T cells resistant to CCR5-tropic HIV, potentially through the impact of increased MIP-1α and MIP-1β production by these cells [4,18].

The T-box transcription factors T-bet and eomesodermin (Eomes) regulate effector T cell differentiation and have been closely associated with the programming of effector functions of CD8+ T cells (reviewed in [19]) and cytolytic CD4+ T cells [11,20]. These transcription factors also have a role in driving CD4+ T cell polarization, where T-bet is particularly known as the classical Th1 lineage-defining transcription factor. Interestingly, many of the genes encoding CD4+ T cell functions associated with HIV control, including prf1, gzmb, ccl3, ccl4, and ccl5, are directly regulated by T-bet [21]. Eomes can compensate for the loss of T-bet to retain effector functions such as IFNγ production [22] and as such can function, to some extent, as a "paralog" to T-bet. We have previously shown that cytolytic CD8+ T cells have high levels of T-bet and intermediate expression of Eomes in peripheral blood [23]; however, it remains less clear whether production of cytolytic granules or β-chemokines by human CD4+ T cells is similarly coupled to the expression levels of T-bet and Eomes at the single-cell level.

While considerable evidence suggests that cytolytic and autocrine β-chemokine producing HIV-specific CD4+ T cells might be involved in control of HIV infection, it remains unclear whether these cells can mediate antiviral responses within lymphoid tissues, the primary site of HIV/SIV replication [24]. Here, we sought to directly evaluate the role of effector CD4+ T cell-mediated control of viral replication in HIV-infected lymph nodes (LNs) over the course of HIV infection. We identify a T-bet hi Eomes+ population that almost exclusively marks cytolytic and β-chemokine producing CD4+ T cells in the blood of healthy and HIV-infected subjects. However, this population of CD4+ T cells is nearly absent in lymphoid tissues of chronically HIV-infected subjects, suggesting that cytolytic HIV-specific CD4+ T cells are not a major component of long-term viral control in lymphoid tissues.
Cytolytic molecules and β-chemokines are almost exclusively expressed by T-bet hi Eomes+ CD4+ T cells

We have previously shown that cytolytic molecule (perforin and Granzyme B) expression in peripheral blood CD8+ T cells is strongly associated with high expression levels of T-bet (T-bet hi) and intermediate expression of Eomes (Eomes+) [23]. Independent of HIV infection status, we found that high levels of T-bet and Eomes expression in CD4+ T cells were also strongly linked to perforin and Granzyme B expression (Fig 1A and S1 Fig). Expression of perforin was highly enriched within the T-bet hi Eomes+ CD4+ T cell population (Fig 1B). Furthermore, the frequency of T-bet hi CD4+ T cells correlated strongly with the frequency of perforin+ CD4+ T cells (Fig 1C). Using ImageStream analysis, we found, similar to CD8+ T cells [25], that T-bet was primarily localized in the nucleus of T-bet hi CD4+ T cells, while T-bet dim cells had T-bet localized more in the cytoplasm (Fig 1D), suggesting that T-bet may be transcriptionally active in T-bet hi CD4+ T cells. A previous murine study on gut intra-epithelial CD4+ T cells demonstrated that cytolytic CD4+ T cells can have a ThPok lo Runx3 hi phenotype [26]. We found no association between low levels of ThPok expression and the frequency of Granzyme B+perforin+ CD4+ T cells in blood (S1 Fig). Furthermore, not all Granzyme B+perforin+ CD4+ T cells had a Runx3 hi phenotype (S1 Fig).

In order to generate a spatial CD4+ T cell differentiation map, we next used non-linear dimensional reduction t-SNE analysis by embedding multi-parametric single-cell information into two dimensions (Fig 1E).

Peripheral blood T-bet hi CD4+ T cells are preserved in HIV-1 infection

The presence of effector CD4+ T cells during acute and chronic HIV infection has been associated with slower disease progression [9]. We first determined the direct impact of HIV-1 infection in vivo on the frequency of T-bet expressing CD4+ T cells in blood. HIV-seronegative individuals (n = 10) who subsequently seroconverted were followed from before infection and longitudinally up to almost 800 days after the first positive HIV test using the RV217 cohort [27]. All subjects were untreated during this period of time. Between the initial visit prior to infection and >1 year post-infection, the median frequency of the T-bet hi CD4+ T cell subset increased by almost 100% (Fig 2A). Further analysis from before infection and longitudinally during the acute phase of HIV-1 infection revealed that the frequency of T-bet hi CD4+ T cells sharply increased at the first sampled time-points after infection, transiently decreased after peak viremia, and subsequently slowly increased again (Fig 2B). The expression pattern of both T-bet and Eomes was further evaluated in a larger cross-sectional European cohort of HIV-seronegative and -positive subjects, where we found that the median frequency of T-bet hi Eomes+ cells in memory CD4+ T cells was 6.7 times higher in HIV-infected chronic progressors compared to healthy subjects (Fig 2C). Furthermore, T-bet hi Eomes+ cells were significantly less frequently infected in vitro than other conventional memory (CD45RO+) CD4+ T cells (S4 Fig), supporting previous studies showing that terminally-differentiated and β-chemokine producing CD4+ T cells contain fewer HIV-DNA molecules than more early-differentiated CD4+ T cell subsets [15,28]. We further conducted longitudinal assessments after ART was initiated and found that the frequency of T-bet hi Eomes+ cells decreased after ART. However, this drop was associated with
an accumulation of naïve cells following ART initiation (S5 Fig), suggesting that the change in frequency is a consequence of naïve cells redistributing in blood after ART. Indeed, other data demonstrated that the absolute counts of T-bet hi Eomes+ cells did not change longitudinally after ART (S5 Fig), indicating that HIV replication and rebound have a low impact on the effector CD4+ T cell population in blood. Altogether, these data suggest that T-bet hi CD4+ T cell levels are preserved in blood during HIV infection.

Elevated effector functions by HIV-specific CD4+ T cells during acute infection

We next used the RV217 acute cohort to monitor the frequency of T-bet and the effector characteristics of HIV-Gag-specific CD4+ T cells from before infection and at close intervals during acute infection. Early after infection, we found a sharp increase in HIV-specific IFNγ+ CD4+ T cell responses that later declined (S6 Fig). The very early response at peak viremia was associated with an effector HIV-specific CD4+ T cell response with elevated levels of T-bet (Fig 2D), perforin (Fig 2E) and MIP-1α (Fig 2F) production. Following peak viremia, however, T-bet expression decreased rapidly, concordant with a subsequent decline of perforin and MIP-1α production by HIV-specific CD4+ T cells (Fig 2D-2F), demonstrating a close temporal relationship between effector CD4+ T cell responses and T-bet expression following HIV infection.

Dissociated presence of cytolytic CD4+ T cells in peripheral blood compared to lymph nodes

While peripheral blood CD4+ T cells with an effector profile are preserved in chronic HIV infection (Fig 2), it remains uncertain whether similar cells are present in lymphoid tissues, such as lymph nodes (LNs), where they could interact with HIV-infected target cells. To determine this, we first assessed the maturation status (CCR7, CD27, CD45RO) of bulk CD4+ T cells in blood and LNs (S7A Fig) from HIV-infected and -uninfected individuals. We found that HIV chronic progressors (CP) and ART-treated (ART+) subjects in particular had elevated levels of transitional memory (TM) cells in LNs (Fig 3A), which correlated with the expansion of T follicular helper cells (Tfh) (S7B Fig), as previously described for HIV-infected subjects [29,30]. In contrast, HIV-seronegative subjects had higher levels of TM cells in blood (Fig 3A). Independently of HIV-infection status, both effector memory (EM) cells and terminally-differentiated (TD) cells were significantly reduced in LNs compared to blood (Fig 3A).

Given the decreased levels of terminally-differentiated CD4+ T cells in LNs, we next assessed whether this was further associated with fewer T-bet hi Eomes+ effector CD4+ T cells in lymphoid tissues compared to blood. Accordingly, we found very few T-bet hi Eomes+ CD4+ T cells in LNs for all groups (Fig 3B). The few Eomes+ cells present in LNs had greatly reduced T-bet expression in contrast to blood-derived CD4+ T cells (Fig 3B). We generally found significantly lower frequencies of perforin+ and Granzyme B+ CD4+ T cells in LNs compared to peripheral blood, independent of HIV-infection status (Fig 3C). Perforin expression was consistently lower for both LN and blood CD4+ T cells compared to matched LN and blood CD8+ T cells (S7C Fig). Furthermore, Granzyme B showed low co-expression with perforin, and these cells were instead skewed towards a CD27+ profile in LNs (Fig 3D). We next employed tSNE analysis by merging single-cell CD4+ T cell data from matched blood and LN of an HIV+ CP with high levels of Tfh cells in LNs and cytolytic CD4+ T cells in blood. Based on a set of memory and cytolytic markers, the tSNE analysis confirmed a dissociation between effector and Tfh cells in the blood and LN (Fig 3E). Independent of HIV-infection status, these data together demonstrate an inherent lack of the blood-like T-bet hi Eomes+ CD4+ T cell population, and thereby cytolytic cells, in LNs.
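As an illustration of how such a tSNE map can be built from exported flow cytometry data, the following is a minimal sketch assuming per-cell marker intensities in a CSV file; the file name, column names and perplexity are assumptions for illustration, not details from the study.

```python
# Hypothetical sketch of a tSNE embedding of single-cell flow cytometry
# data, assuming per-cell marker intensities were exported (e.g. from
# FlowJo) to CSV with a 'tissue' column tagging LN vs PB.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE

MARKERS = ["CD45RO", "CD27", "CCR7", "Tbet", "Eomes",
           "GranzymeA", "GranzymeB", "Perforin"]

# Each row is one gated live CD4+ T cell.
cells = pd.read_csv("cd4_marker_intensities.csv")

# Standardize intensities so no single marker dominates the embedding.
X = StandardScaler().fit_transform(cells[MARKERS])

# Embed into two dimensions; perplexity is a tunable neighborhood size.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
cells[["tsne1", "tsne2"]] = emb

# Clusters such as 'effector' (high T-bet/perforin) can then be gated or
# colored by tissue of origin to compare LN and PB, as in Fig 3E.
print(cells.groupby("tissue")[["tsne1", "tsne2"]].mean())
```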
Paucity of effector CD4+ T cell responses in HIV-infected lymph nodes

Based on these premises, we next assessed the functional properties of CD4+ T cells following stimulation with overlapping Gag and Env peptide pools. Paired blood and LN samples from HIV+ CPs and ART+ subjects were collected for these assessments. We found higher frequencies of Gag-specific CD4+ T cells in LNs compared to blood for both CPs and ART+ subjects (Fig 4A). Additionally, the magnitude of the Env-specific CD4+ T cell response was only

Next, we examined whether LN CD4+ T cells could upregulate perforin after stimulation as a measure of their cytolytic potential [31,32]. In general, few LN Gag-specific CD4+ T cells upregulated perforin after peptide stimulation, whereas Gag-specific CD4+ T cells in the blood expressed perforin to a higher degree (Fig 4C). The limited ability of LN Gag-specific CD4+ T cells to express perforin correlated with lower levels of T-bet hi cells in LN compared to blood (Fig 4D). No difference between LN and blood was found in ART+ subjects for either perforin or T-bet hi cells, suggesting that cytolytic Gag-specific CD4+ T cell responses are only present in blood during viremic episodes.

We previously observed that β-chemokines are predominantly produced by the T-bet hi Eomes+ population (Fig 2), suggesting that they may not be expressed by LN CD4+ T cells given the general lack of high T-bet expression. We further explored this phenomenon on SEB- and CMV-specific CD4+ T cells and found significantly lower frequencies of MIP-1α+ SEB-stimulated (Fig 4E) and CMV-specific (Fig 4F) CD4+ T cells in LNs compared to blood. Notably, LN CMV-specific CD4+ T cells had poor co-expression of IFNγ and TNF in contrast to their counterparts in blood (S9A Fig), indicating an inherent lack of multiple effector functions also by CMV-specific CD4+ T cells in LNs.

HIV elite controllers and acute seroconverters lack effector-like CD4+ T cell responses in lymph nodes

HIV-1 elite controllers usually demonstrate a vigorous and highly polyfunctional effector HIV-specific CD4+ T cell response in peripheral blood [3-6,8]. We obtained LNs from a cohort of HIV elite controllers (n = 9) and assessed whether those subjects had any evidence of cytolytic CD4+ T cell activity in LNs. Most Gag-specific CD4+ T cells in blood from elite controllers tended to express moderate levels of T-bet but still showed an enrichment in the T-bet hi Eomes+ subset compared to LN Gag-specific CD4+ T cells. Furthermore, early cycling (Ki-67+) CD4+ T cells, indicative of HIV-specific T cells [33,34], in blood showed tendencies of higher perforin, Granzyme B and T-bet hi expression than LN Ki-67+ CD4+ T cells in the acute/early HIV seroconverters (Fig 5D). Taken together, our data provide evidence of dissociated effector-like HIV-specific CD4+ T cell responses in LN compared to blood, suggesting spatially and temporally dissociated mechanisms of viral control in LNs of HIV elite controllers and early HIV seroconverters.

We further explored CXCR5 expression on degranulating HIV-specific CD4+ T cells and found that a majority of the LN CD107a+ Gag-specific CD4+ T cells were either CXCR5+ or CXCR5 hi (Fig 6B) and had minimal perforin expression independent of ART status (Fig 6C). Blood CD107a+ Gag-specific CD4+ T cells were instead
mostly CXCR5- and demonstrated some perforin expression (Fig 6C). Furthermore, analysis of IFNγ/CD107a/TNF-producing Gag-specific CD4+ T cells confirmed similar characteristics, indicating that HIV-specific CD4+ T cells in LNs are highly skewed towards a CXCR5+ or CXCR5 hi phenotype (S9C Fig).

LN CD4+ T cells form poorly organized synaptic interfaces and exhibit delayed granule release

Cytolytic T cell activity requires granule release through a stable immunological synapse [38]. As such, we directly visualized the structure of CD4+ T cell immunological synapses formed by degranulating LN and blood CD4+ T cells in real time by exposing CD4+ T cells to planar lipid bilayers containing fluorescently labelled soluble ICAM-1 and anti-CD3 antibodies. In order to maximize the vertical resolution at the T cell/bilayer interface, we used total internal reflection fluorescence (TIRF) microscopy to examine the structure of the T cell contact surface and the appearance of CD107a in real time [39-41]. Analysis of LN and blood CD4+ T cells isolated from either chronically HIV-infected or uninfected individuals revealed four different groups of synapse formation: CD4+ T cells that establish mature cytolytic synapses and either do or do not release granules; CD4+ T cells that release granules without establishing cytolytic synapses; and, finally, CD4+ T cells that neither form synapses nor release granules (Fig 7A). CD4+ T cells from both blood and LN of chronically infected individuals contained a considerable fraction of cells capable of releasing granules without formation of mature synapses. However, the fraction of blood CD4+ T cells demonstrating both synapse formation and granule release was considerably larger than that of LN CD4+ T cells in HIV-infected individuals (Fig 7B). Furthermore, the difference between LN and blood CD4+ T cells in the kinetics of granule release, a parameter linked to the efficiency of T cell cytolytic activity [42], was even more striking. Regardless of infection status, blood-derived CD4+ T cells were able to release granules almost twice as fast as LN-derived CD4+ T cells (Fig 7C and 7D). Not surprisingly, the HIV-specific CD4+ T cell clone AC25 showed a superior ability to establish mature immunological synapses and rapid degranulation (Fig 7A-7D). Altogether, these data suggest that while some LN CD4+ T cells are able to degranulate, few have the necessary synaptic stability and cytolytic content to act as efficient killers.

Discussion

The progressive decline of memory CD4+ T cells is a general hallmark of chronic HIV disease in most individuals. Accumulating evidence suggests that HIV-specific CD4+ T cells play an important role in host defense mechanisms against the virus. Late-stage maturation and increased effector functions, including β-chemokine production and cytolytic activity, by peripheral blood CD4+ T cells have been linked to a lower degree of viral susceptibility and slower disease progression in HIV-infected individuals [9-18]. However, whether CD4+ T cells derived from lymphoid tissues of HIV-infected individuals possess such effector functions has heretofore remained unclear. Indeed, lymphoid tissues are known from studies on HIV-uninfected subjects to harbor CD4+ T cells with entirely different plasticity and functional characteristics than CD4+ T cells in blood [43,44]. In this study, we demonstrate that CD4+ T cells expressing high levels of T-bet and Eomes represent the major producers of classical T cell effector functions, such as cytolytic molecules and β-chemokines. However,
this effector CD4+ T cell subset is rare in both HIV-infected and -uninfected LNs, a finding similar to that recently described for CD8+ T cells [32]. Importantly, HIV elite controllers also have lower frequencies of in vivo effector CD4+ T cell responses in LNs during chronic disease, implicating a spatial or temporal displacement in the maintenance of control of HIV replication in LNs. Together these results indicate that effector CD4+ T cells are unlikely to play a major direct role in the control of HIV replication within lymphoid tissue.

Synergy between the transcription factors T-bet and Eomes has been intensively studied in the context of effector CD8+ T cell differentiation, and studies have suggested that the interplay between T-bet and Eomes in CD4+ T cell differentiation drives the cytolytic CD4+ T cell program [11]. T-bet was originally defined as the master regulator of CD4+ T cell Th1 polarization and of the expression of effector cytokines such as IFNγ and TNF [21]. Eomes is also expressed in CD4+ T cells, where it can compensate for loss of T-bet to retain IFNγ production and thereby Th1 polarization [22]. Recent studies have suggested that Eomes is particularly important for driving cytolytic CD4+ T cell responses in vivo [45]. In addition, other transcription factors, such as ThPok, have been associated with cytolytic CD4+ T cell activity in mice [26]. Similar to our previous observations in CD8+ T cells [23,46], we found a very strong correlation between high expression levels of T-bet and perforin within Eomes+ peripheral blood human CD4+ T cells. However, Eomes+T-bet dim/- CD4+ T cells had low perforin expression and poor β-chemokine production, indicating that Eomes alone is not sufficient to maintain effector functions in human CD4+ T cells. Together these data suggest that CD4+ T cells with high levels of T-bet are the preeminent producers of cytolytic molecules and β-chemokines, representing a multifunctional effector CD4+ T cell population in human blood.
Increased frequencies of cytolytic CD4+ T cells have been described previously in the context of HIV infection [10], but it has remained unknown to what degree they are preserved. Through access to cohorts with time points from before and after HIV infection, we now show that the frequencies of effector CD4+ T cells increase longitudinally after HIV infection and that these cells are not depleted from peripheral blood during chronic HIV infection. The reason for their preservation is probably multifaceted, but our data revealed that T-bet hi Eomes+ CD4+ T cells were less susceptible to in vitro HIV infection compared to conventional CD45RO+ CD4+ T cells, potentially due to autocrine β-chemokine production [15]. Another explanation could simply be that effector CD4+ T cells are preserved during HIV infection because they are not present in lymphoid tissues, where the vast majority of viral replication takes place [24]. Most T cells have been thought to recirculate between blood and lymphoid or non-lymphoid tissues [47,48], but recent human data on multiple organs indicate that terminally-differentiated (effector) CD4+ T cells are primarily present in peripheral blood and highly-vascularized tissues [44,49]. Our data support these previous studies on human organ donors, demonstrating that few CD27- terminally-differentiated CD4+ T cells are present in LNs, independent of HIV infection. As such, it remains possible that CD4+ T cell tissue compartmentalization and the trafficking properties of unique subsets could partly explain why certain CD4+ T cells are depleted or preserved during acute and chronic HIV disease.

Peripheral blood contains only 2% of the total number of lymphocytes, while a predominant fraction resides in lymphoid tissues [50,51]. However, most of our understanding of CD4+ T cell function, phenotype, and transcriptional characteristics in HIV infection comes from peripheral blood T cells. Knowledge of LN CD4+ T cell responses against HIV is limited, despite the fact that lymphoid tissues serve as key sites for the dissemination and long-term maintenance of HIV replication and the viral reservoir [52]. Previous studies have established that CD8+ T cells in HIV-infected LNs generally express low levels of cytolytic molecules [53,54], but similar studies have not been conducted on effector LN CD4+ T cells. While CD8+ T cells seem to express some effector molecules in HIV-infected LNs [32], we found few T-bet hi Eomes+ CD4+ T cells, in association with minimal expression of cytolytic molecules, β-chemokines and IFNγ. Cytolytic Gag-specific CD4+ T cell responses were only present in viremic subjects, which indicates that ongoing viral replication is necessary to maintain cytolytic CD4+ T cells against HIV in blood. Furthermore, we also identify that LN CD4+ T cells form impaired immunological synapses and release cytolytic granules with slower kinetics, consistent with inefficient potency of target cell destruction. HIV-specific CD4+ T cells were frequently present in LNs; however, these responses appeared to be monofunctional for TNF production or CD107a upregulation, with less expression of IFNγ. The lack of β-chemokine production in LNs is of particular interest, as several studies have proposed that such functional properties provide resistance to HIV infection [15,16]. Autocrine β-chemokine production by CD4+ T cells could still be a protective mechanism prohibiting productive infection by HIV in peripheral blood, but likely not in LNs.
The magnitude of HIV-specific CD4+ T cell responses was highest in blood during peak viremia and coincided with increased T-bet, perforin and MIP-1α production. In contrast, very few cytolytic CD4+ T cells were present in LNs during acute/early HIV infection. Notably, previous studies have shown that LN T cells egress from tissues and enter the blood stream after the acquisition of cytolytic activity [55]. Despite the finding that LCMV-specific LN CD8+ T cells can degranulate during acute infection, they kill target cells less efficiently [55]. This notion is also in line with our chronic HIV data, showing a dissociation between LN and blood for the different subsets of CD4+ T cells that can degranulate. LN CD4+ T cells can degranulate, but these cells express low levels of cytolytic markers and form impaired immunological synapses. Instead, they seem to be skewed towards a CXCR5+ phenotype, suggesting that unique Tfh subsets degranulate and may secrete factors of unknown nature. Future studies should clarify the role and content of granules in LN CD4+ T cells, as such responses seem to be differently regulated and not to contain cytolytic molecules or β-chemokines.

Numerous studies have identified that HIV elite controllers possess CD4+ T cell responses with higher polyfunctionality, Gag specificity, proliferative capacity, cytolytic activity and β-chemokine production [4,5,8,9,18,56]. Surprisingly, we found a distinct dissociation of effector CD4+ T cell responses in LNs compared to PB in HIV elite controllers. The higher abundance of effector CD4+ T cell responses in peripheral blood could suggest that such responses, together with other factors, maintain control in the blood circulation, but not in lymphoid tissues. Furthermore, previous studies have established that HIV-specific CD4+ T cells generate pressure on the virus, based on the fact that escape mutations emerge within MHC-II-restricted epitope regions [57-59]. Given the limited expression of cytolytic molecules by both HIV-specific CD4+ and CD8+ T cells in LNs [32], it remains possible that non-cytolytic factors are involved in the control of HIV and could generate viral escape mutants in LNs [60]. Indeed, elegant studies combining CD8+ T cell depletion and ART to study the lifespan of SIV-infected cells longitudinally found that depletion of CD8+ T cells had minimal effect on the death rate of virus-infected cells, indicating that CD8+ T cells must act via mechanisms other than direct lysis of infected cells [61,62]. Similar studies have also demonstrated that the death rates of cells infected with wild-type and escape-mutant viruses do not differ, indicating that non-cytolytic functions can drive viral escape and are associated with control of lentiviruses [63]. Modeling of escape has also shown that non-cytolytic functions can select for escape variants, although more slowly than cytolytic responses [64]. Further studies should clarify whether antiviral functions [65] produced by HIV-specific T cells can generate selective pressure on the virus and lead to elite control of HIV in lymphoid tissues.
In conclusion, we have determined that CD4+ T cells with elevated levels of T-bet and Eomes represent a pleiotropic effector population that is present and preserved in HIV-infected blood. This unique CD4+ T cell population is, however, almost excluded from HIV-infected LNs. As a consequence, LN CD4+ T cells generally possess lower effector functions, independent of HIV disease status or infection. Our data provide evidence of a lack of association between effector-like HIV-specific CD4+ T cell responses in LNs and blood, suggesting dissociated mechanisms of viral control in LNs.

Ethical statement

Written informed consent was obtained from all study participants, and blood samples were acquired with institutional review board approval at each collecting institution: University of Pennsylvania (IRB#809316, IRB#815056), Human Subjects Protection Branch (RV217/WRAIR#1373), The United Republic of Tanzania Ministry of Health and Social Welfare (MRH/R.10/18/VOLL.VI/85), Tanzanian National Institute for Medical Research (NIMR/HQ/R.8aVol.1/2013), Royal Thai Army Medical Department (IRBRTA 1810/2558), Uganda National Council for Science and Technology-National HIV/AIDS Research Committee (ARC 084), Uganda National Council of Science and Technology (HS 688), The Swedish Regional Ethical Council (Stockholm, Sweden 2009/1485-31, 2013/1944-31/4, 2014/920-32, 2012/999-32 and 2009/1592-32), INER-CIENI Ethics Committee and the Federal Commission for the Protection against Sanitary Risk (COFEPRIS), and the Institutional Review Boards of Case Western Reserve University (CWRU) and the Cleveland Clinic Foundation (CCF). All human subjects were adults. This study was conducted in accordance with the Declaration of Helsinki.

Samples

Mesenteric, iliac, inguinal and cervical lymph node (LN) biopsies and peripheral blood were collected from individuals classified as HIV− (n = 51), HIV+ chronic and naïve to ART (n = 71), HIV+ chronic on ART (n = 25), HIV+ elite controllers (n = 18), and HIV+ acute/early seroconverters (n = 17). Recruitment occurred at six sites: University of Pennsylvania (Penn) (HIV− blood and iliac LNs; HIV+ ART+/ART− blood and iliac LNs); INER-CIENI in Mexico City (HIV+ ART+/ART− blood and matched cervical LNs); Case Western Reserve University (HIV− mesenteric LNs); University of California, San Francisco (UCSF) (HIV+ EC inguinal LNs); Karolinska Institutet (HIV− and HIV+ ART+/ART− blood); and the RV217 early capture HIV cohort (longitudinal HIV− and HIV+ ART− blood samples). Subject grouping, tissue types, and clinical parameters are summarized in S1 Table. All clinical characteristics of the RV217 cohort have been described previously [66]. HIV− LN samples were obtained from the following procedures/conditions: mesenteric LNs (patients undergoing abdominal surgery) and iliac LNs (kidney transplant donors). Sample size was based on the availability of biological samples rather than a pre-specified effect size, and investigators were not blinded while executing experiments.

Specimens

Peripheral blood mononuclear cells (PBMCs) were collected from whole blood or leukapheresis products using Ficoll-Hypaque (GE Healthcare) density gradient centrifugation. Lymph node mononuclear cells (LNMCs) were isolated by manual disruption or using a gentleMACS tissue dissociator. PBMCs and LNMCs were cryopreserved and stored at -140˚C for further use in all experiments.

Flow cytometry (FACS)

For all experiments, cryopreserved human PBMCs and LNMCs were thawed and rested for at least 1 hour in complete media (R10), consisting of RPMI-1640 media supplemented with 10% FBS, 1% L-glutamine, and 1% penicillin/streptomycin. Cells were then washed with PBS, pre-stained for chemokine receptors/adhesion molecules at 37˚C for 10 minutes, stained with LIVE/DEAD Aqua (Invitrogen) for 10 minutes, and surface stained with an optimized antibody cocktail for a further 20 minutes. Cells were then washed with FACS buffer (PBS containing 0.1% sodium azide and 1% BSA), fixed, and permeabilized using the Cytofix/Cytoperm Buffer Kit (BD Biosciences) or the FoxP3 Transcription Factor Buffer Kit (eBioscience). An optimized antibody cocktail was then added for 1 hour to detect intracellular/intranuclear markers. Cells were fixed in PBS containing 1% paraformaldehyde (Sigma-Aldrich) and stored at 4˚C. All samples were acquired within 3 days using an LSRII (BD Biosciences), and data were analyzed with FlowJo software (version 9.8.8 or higher, TreeStar).
For sorting experiments, PBMCs and LNMCs were thawed and rested overnight. Cells were stained the next day in 15-mL conical tubes following the procedure described above with higher concentrations of antibodies (i.e. not diluted 1:50 in FACS buffer). Cells were then washed with PBS and suspended in R10 media. Sorting was carried out using a FACSAriaII (BD Biosciences) instrument.

Single-cell gene expression analysis

PBMCs and LNMCs from HIV-uninfected subjects (n = 2) were thawed, rested overnight and stained as described above. SEB-stimulated CD107+ CD4+ T cells were single-cell index sorted into individual wells of a 96-well PCR plate according to the gating strategy depicted in S10A Fig. Each well contained 5 μL lysing buffer, consisting of 4.725 μL of DNA Suspension Buffer (10 mM Tris, pH 8.0, 0.1 mM EDTA; TEKnova), 0.025 μL of 20 U/μL SUPERase (Ambion) and 0.25 μL of 10% NP40 (Thermo Scientific). After FACS sorting, PCR plates were frozen and kept at -80˚C until use.

As before [35], PCR plates were thawed and pre-heated for 90 seconds at 65˚C. Reverse Transcription Master Mix (Fluidigm) was added to each well and the plate was placed into a thermocycler for reverse transcription (25˚C for 5 min, 42˚C for 30 min, 85˚C for 5 min). Next, pre-amplification mix, consisting of a pooled mixture of all primer assays (500 nM), 5× PreAmp Master Mix (Fluidigm) and H2O, was added to each well and run on a thermocycler (95˚C for 5 min, followed by 18 cycles of 96˚C for 5 sec and 60˚C for 6 min). An exonuclease mixture, containing Exonuclease I (New England BioLabs), 10× Exonuclease I Reaction Buffer (New England BioLabs) and H2O, was added to each well to remove excess primers, and the plate was run on a thermocycler (37˚C for 30 min, 80˚C for 15 min). Each well was diluted (1:4) with DNA Suspension Buffer (10 mM Tris, pH 8.0, 0.1 mM EDTA; TEKnova). Distinct primer assays were generated by adding individual primer pairs (5 μM) together with a mix of 2× Assay Loading Reagent (Fluidigm) and 1× DNA suspension buffer to each well of a new plate. A "sample PCR plate" was created by dispensing a sample master mix, containing 2× Sso Fast EvaGreen Supermix with Low ROX (Bio-Rad), 20× DNA Binding Dye Sample Loading Reagent (Fluidigm) and H2O, into each well. Pre-amplified samples were added to each well of the sample PCR plate. Control line fluid (Fluidigm) was injected into the 96.96 Dynamic Array chip (Fluidigm) and the chip was primed using an IFX Controller HX. After priming, primer assays and sample mix were added to the unique detector inlets of the chip, which was transferred to the IFX Controller HX for loading of the mixtures. The chip was then transferred to a Biomark HD instrument (Fluidigm) and run using the GE Fast 96x96 PCR+Melt v2.pcl program. All primers were purchased from IDT, and assay efficiency as well as melting and amplification curves for each assay were evaluated beforehand on separate Biomark HD runs and using qPCR.

All data were pre-analyzed with the Real-time PCR analysis software (Fluidigm), with Linear (Derivative) and User (Detectors) used as settings to generate Ct values. A Ct value of 25 was used as the limit of detection (LOD). The relative gene expression was defined as a log2 value using the formula: log2 = LOD - Ct. All downstream analyses of the gene expression data, including tSNE analysis, were performed using R Studio or GraphPad Prism v7.0 (GraphPad).
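The LOD-based transformation described above is straightforward to reproduce; the sketch below assumes a Ct matrix with cells as rows and genes as columns, which is an illustrative layout rather than the study's actual export format.

```python
# Sketch of the relative expression calculation described above:
# Ct values at or above the limit of detection (LOD = 25) are treated as
# undetected, and expression is reported as log2 = LOD - Ct.
import pandas as pd

LOD = 25  # Ct limit of detection used in the study

def relative_expression(ct: pd.DataFrame) -> pd.DataFrame:
    """Convert a Ct matrix (rows = cells, columns = genes) to log2 values."""
    ct_clipped = ct.clip(upper=LOD)  # undetected wells contribute 0
    return LOD - ct_clipped

# Example: two cells measured for three genes (values are made up).
ct = pd.DataFrame({"PRF1": [18.2, 25.0], "GZMB": [20.5, 24.1],
                   "CCL4": [26.3, 19.8]}, index=["cell1", "cell2"])
print(relative_expression(ct))  # CCL4 in cell1 is beyond the LOD -> 0
```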
Isolation of CD4+ T cells for image analysis

Frozen PBMCs or LNMCs from uninfected or HIV-infected individuals (n = 2) were thawed and incubated overnight in complete media (RPMI supplemented with 10% fetal bovine serum, penicillin/streptomycin and glutamine) at 37˚C. CD4+ T cells were then purified by negative immunomagnetic sorting using a MACS CD4+ T cell purification kit according to the manufacturer's instructions. The cells were transferred into assay buffer (20 mM HEPES, pH 7.4, 137 mM NaCl, 2 mM Na2HPO4, 5 mM D-glucose, 5 mM KCl, 1 mM MgCl2, 2 mM CaCl2, and 1% human serum albumin) and kept at +4˚C (1-2 hours) prior to analysis. The human HIV-specific CD4+ CTL clone AC-25, which recognizes the PEVIPMFSALSEGATP (PP16) peptide from the HIV Gag protein, was used as a positive control [39,68].

TIRF microscopy

TIRF images were acquired with an Andor Revolution XD system equipped with a Nikon TIRF-E illuminator, 100×/1.49NA objective, Andor iXon X3 EM-CCD camera, objective heater, and a piezoelectric motorized stage with Perfect Focus. The cells were combined with Alexa 568-labeled anti-CD107a antibody Fab fragments at a final concentration of 4 μg/ml and injected into the channel of ibidi slides containing a lipid bilayer. Images of the bilayers were then recorded for 30 minutes at a rate of one image per minute. The resulting images were analyzed using the MetaMorph imaging suite.

Image analysis

To determine the parameters of cell-bilayer interactions, we chose only CD4+ T cells productively interacting with the bilayers. Productive interaction was determined by accumulation of anti-CD3 antibodies at the interface and confirmed by morphological analysis of the cells observed in the transmitted-light images. Clustered cells and visibly damaged or apoptotic cells were excluded from analysis.

The efficiency of ICAM-1 accumulation for selected cells was measured by determining the average fluorescence intensity of accumulated Cy5-labeled ICAM-1 molecules at the cell-bilayer interface over the background fluorescence outside of the contact area in close proximity to the cell. A cell was judged to accumulate ICAM-1 if the signal-to-background ratio was at least 1.3. If accumulated ICAM-1 molecules formed a ring structure that was observed on at least two consecutive images, we determined that the cell was developing a pSMAC. Granule release was evaluated by measuring the average fluorescence intensity of accumulated Alexa 568-labeled anti-CD107a antibody Fab fragments at the T cell/bilayer interface over the background outside the contact area but in close proximity to the cell. Cells with a ratio of Alexa 568 signal to background of at least 1.3 were designated as degranulating cells. All analyzed cells demonstrated one of four degranulation patterns: forming or not forming a pSMAC, combined with degranulating or not at the same time. For every sample, we determined the fraction of total cells corresponding to each pattern of degranulation. To quantify the kinetics of granule release at the cell-bilayer interface, we determined, for all degranulating cells, the earliest time point at which degranulation was observed.
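The classification rules above (a 1.3 signal-to-background threshold, pSMAC persistence over at least two consecutive frames, and four resulting degranulation patterns) can be sketched as follows; the per-frame ratio inputs are assumed to have been extracted beforehand, e.g. in MetaMorph, so the data structure here is illustrative.

```python
# Sketch of the per-cell classification described above. Ratios are the
# average fluorescence at the cell/bilayer interface over local background;
# the 1.3 threshold and four patterns follow the text, but the precomputed
# per-frame ratio lists are an assumption for illustration.
from dataclasses import dataclass
from typing import List, Optional

THRESHOLD = 1.3  # minimum signal-to-background ratio

@dataclass
class CellTrack:
    icam1_ratio: List[float]   # Cy5 ICAM-1 ratio per frame (1 frame/min)
    cd107a_ratio: List[float]  # Alexa 568 anti-CD107a ratio per frame

def forms_psmac(cell: CellTrack) -> bool:
    """ICAM-1 accumulation on at least two consecutive frames (the ring
    morphology itself is verified separately by masking/inspection)."""
    hits = [r >= THRESHOLD for r in cell.icam1_ratio]
    return any(a and b for a, b in zip(hits, hits[1:]))

def first_degranulation_min(cell: CellTrack) -> Optional[int]:
    """Earliest frame (minute) at which granule release is detected."""
    for minute, r in enumerate(cell.cd107a_ratio, start=1):
        if r >= THRESHOLD:
            return minute
    return None

def pattern(cell: CellTrack) -> str:
    """Assign one of the four degranulation patterns."""
    synapse = forms_psmac(cell)
    release = first_degranulation_min(cell) is not None
    return {(True, True): "synapse + release",
            (True, False): "synapse only",
            (False, True): "release only",
            (False, False): "neither"}[(synapse, release)]
```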
Imagestream analysis

For imaging flow cytometry, human PBMCs were stained with DAPI (5 µg/mL) for 5 min and 100,000 events were imaged on an ImageStreamX (Amnis Corp). Images were captured using a 60× lens with an extended depth of field option using Inspire software (Amnis Corp). Antibody capture beads (BD Biosciences) were used as individual compensation tubes for each fluorophore. Masking functions within IDEAS 5.0 were used to define nuclear and cytoplasmic T-bet and Eomes as previously described [25].

HIV infection assay

The DHIV3 plasmid was provided by Dr. Edward Barker [71] (Rush University, Chicago, IL). VSV-G pseudotyped viruses were produced as previously described [72]. PBMCs were stimulated with SEB for 3 days before addition of the pseudotyped virus. Gag-p24 (clone KC57, Beckman Coulter) expression was determined by flow cytometry on day 5 post-infection.

Statistical analysis

Mann-Whitney or unpaired t-tests were used to compare differences between unmatched groups, and Wilcoxon matched-pairs signed rank or paired t-tests were used to compare differences between matched samples. Spearman or Pearson tests were used for correlation analyses. Non-parametric or parametric tests were chosen based on the normal distribution of the data points (Shapiro-Wilk normality test). All analyses were performed using R Studio or GraphPad Prism v7.0 (GraphPad). FlowJo and Cell ACCENSE (Automatic Classification of Cellular Expression by Nonlinear Stochastic Embedding) analyses were used to conduct multivariate tSNE analysis on the single-cell flow data sets [73].
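The test-selection logic of the Statistical analysis section above can be sketched in a few lines; the alpha level for the normality check and the exact decision rule are illustrative assumptions rather than details stated in the paper.

```python
# Sketch of the test selection described above: Shapiro-Wilk normality
# decides between parametric and non-parametric comparisons for two
# unmatched or matched groups.
from scipy import stats

def compare_groups(a, b, paired: bool, alpha: float = 0.05):
    """Pick and run the appropriate two-group test; returns (name, p-value)."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if paired:
        test = stats.ttest_rel if normal else stats.wilcoxon
    else:
        test = stats.ttest_ind if normal else stats.mannwhitneyu
    result = test(a, b)
    return test.__name__, result.pvalue

# Example: hypothetical frequencies of perforin+ CD4+ T cells in matched
# LN vs blood samples from the same subjects.
ln = [0.1, 0.3, 0.2, 0.4, 0.1, 0.2]
pb = [2.1, 3.5, 1.8, 4.0, 2.6, 3.1]
print(compare_groups(ln, pb, paired=True))
```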
Fig 1. Cytolytic CD4+ T cells express high levels of T-bet and Eomes in blood. (A) Representative flow cytometry plots of Granzyme B and perforin expression in CD4+ T cells for an HIV-infected and an HIV-uninfected subject. The distribution of Granzyme B+perforin+ (red) and Granzyme B-perforin- (blue) CD4+ T cells is shown for T-bet and Eomes expression. (B) Frequency of perforin+ CD4+ T cells within the T-bet hi Eomes+ and T-bet dim/- populations (left) and of T-bet hi Eomes+ cells within the perforin+ or perforin- populations for HIV-infected and -uninfected subjects. (C) Correlation between the frequency of perforin+ and T-bet hi CD4+ T cells. (D) ImageStream analysis of T-bet hi and T-bet dim CD4+ T cells. Overlays of fluorescent channels for DAPI (nuclear) and T-bet, showing where T-bet is localized in the cells. The frequencies of nuclear, nuclear/cytoplasmic and cytoplasmic localization for T-bet hi and T-bet dim CD4+ T cells are shown in the before-after graphs. (E) tSNE plots based on 30,000 live CD4+ T cells merged from three HIV-uninfected subjects with detectable cytolytic CD4+ T cells. The tSNE clustering is based on CD45RO, CD27, CCR7, T-bet, Eomes, Granzyme A, Granzyme B and perforin expression intensity. The red gate indicates the identified "effector" cluster with overlapping expression of cytolytic markers as well as T-bet and Eomes. (F) Flow plots of MIP-1α production using media (NC) and aCD3-CD28 stimulations for T-bet hi and Eomes+ CD4+ T cells, as well as the correlation between the frequencies of T-bet hi Eomes+ and MIP-1α+ CD4+ T cells following aCD3-CD28 stimulation. Median and IQR are shown for all scatter plots and Mann-Whitney tests were performed to compare differences between groups; ***P < 0.001. A non-parametric Spearman test was used for the correlation analysis. All data are derived from the North-American cohort. https://doi.org/10.1371/journal.ppat.1006973.g001

Fig 2. Temporal dynamics of T-bet hi expression and effector HIV-specific CD4+ T cell responses following HIV infection in blood. (A) Frequencies of T-bet hi CD4+ T cells before HIV infection and in the first sample taken 1 year after infection (n = 10). (B) Longitudinal changes of T-bet hi CD4+ T cells before and subsequently after HIV infection. Every individual is depicted with black connecting lines, and the red line indicates the estimated mean value (linear regression) over time (n = 10). (C) Frequency of T-bet hi expression on memory CD4+ T cells in HIV- and HIV+ chronic progressors (CP). (D) T-bet, (E) perforin and (F) MIP-1α expression by IFNγ+ Gag-specific CD4+ T cells before and subsequently following HIV infection (n = 10). The colored lines represent each subject and their frequencies of T-bet hi, perforin+ and MIP-1α+ Gag-specific CD4+ T cells over time. A Wilcoxon or Mann-Whitney test was performed to compare the difference between groups; *P < 0.05. Longitudinal data-points are derived from the RV217 cohort and cross-sectional data from the European cohort. https://doi.org/10.1371/journal.ppat.1006973.g002
Fig 3. Phenotypic, cytolytic and transcriptional differences between LN and blood CD4+ T cells in HIV-infected and -uninfected individuals. (A) Flow cytometry plots (HIV-infected CP) and scatter plots for naïve and memory subsets of LN and peripheral blood (PB) CD4+ T cells in HIV-infected and -uninfected subjects. (B) Flow cytometry plots (HIV-infected CP) showing the lack of T-bet hi Eomes+ CD4+ T cells in LNs. Corresponding scatter plots demonstrating the frequency of T-bet hi cells of memory (non-naïve) CD4+ T cells (top) and the frequency of Eomes+ cells of T-bet hi CD4+ T cells (bottom) for matched LN and PB. (C) Flow plots (HIV-infected CP) showing the lack of Granzyme B+perforin+ CD4+ T cells in LNs and scatter plots with the frequency of LN and PB perforin+ cells of memory CD4+ T cells (top). Frequencies of Granzyme B+ cells of perforin+ CD4+ T cells (bottom) for matched LN and PB. (D) Flow plots (HIV-infected CP) and scatter plots showing the distribution of CD27+ cells within the Granzyme B+ CD4+ T cell compartment for matched LN and PB. (E) The distribution of different populations in the tSNE space is based on 30,000 live CD4+ T cells merged from LN and PB of an HIV-infected CP with detectable levels of cytolytic cells in the PB and Tfh cells in the LN. The tSNE clustering is based on CD45RO, CD27, CCR7, T-bet, Eomes, Granzyme B, perforin, CXCR5 and PD-1 expression on gated bulk CD4+ T cells. The naïve cluster (green) is based on high CCR7 and low CD45RO intensity; the Tfh cluster (red) on high intensity of PD-1 and CXCR5; and the effector cluster (orange) on high T-bet and perforin expression intensity. After separating out the merged LN and PB single CD4+ T cell data, a lack of Tfh cells was apparent in PB and of effector CD4+ T cells in the LN (lower right tSNE plots). Median and IQR are shown for all scatter plots. Mann-Whitney tests were performed to compare differences between two unmatched groups, and Wilcoxon matched-pairs signed rank tests between matched samples; *P < 0.05, **P < 0.01 and ***P < 0.001. All data-points are derived from the North-American and Mexico cohort.

Fig 4. Functional characteristics of polyclonal and virus-specific effector CD4+ T cell responses in HIV-infected LNs and blood. (A) Flow cytometry plots (HIV ART+ subject) and plots for matched HIV-Gag- or -Env-specific CD4+ T cell responses in HIV-infected LN and PB. (B) Flow cytometry plots (HIV-infected CP) showing the negative control (NC) and the Gag-specific LN CD4+ T cell response. The high abundance of CD107a (red) that is not co-expressed with IFNγ or TNF is illustrated in this example. SPICE analysis of functional combinations between LN (red) and PB (red-gray) Gag-specific CD4+ T cell responses for HIV-infected CPs and ART+ subjects. (C) Flow plots (HIV-infected CP) of the Gag-specific CD4+ T cell response (red) from LN and PB in relation to CD27 and perforin expression. Graphs represent the frequency of (C) perforin+ and (D) T-bet hi cells between LN and PB Gag-specific CD4+ T cells. (E) Flow plots (HIV-infected CP) of MIP-1α versus IFNγ and TNF production for LN and PB SEB-stimulated CD4+ T cells. Corresponding plots showing the frequency of MIP-1α+ SEB-stimulated CD4+ T cells (top) and MIP-1α+ of IFNγ/TNF/CD107a/MIP-1α+ SEB-stimulated CD4+ T cells (bottom). (F) Flow plots (HIV-infected CP) of MIP-1α versus IFNγ and TNF production for LN and PB CMV-specific CD4+ T cells, and corresponding graphs showing the frequency of MIP-1α+ CMV-specific CD4+ T cells (top) and MIP-1α+ of IFNγ/TNF/CD107a/MIP-1α+ CMV-specific CD4+ T cells (bottom). Median and IQR are shown for all bar plots. Permutation tests were performed between the pie charts. Wilcoxon matched-pairs signed rank tests were performed to compare differences between matched samples.
Fig 6. Functional and transcriptional differences between degranulating cells in LN and blood. (A) Flow cytometry plots (HIV-uninfected subject) of matched LN and PB CD107a+ SEB-stimulated CD4+ T cell responses. Graphs show the frequencies of LN (top) and PB (bottom) CD107a+ SEB-stimulated CD4+ T cell responses for CXCR5-, CXCR5+ and CXCR5 hi cells. (B) Flow plots (HIV-infected CP) illustrating the expression of CD107a+ (red) Gag-specific CD4+ T cells in relation to CXCR5 for LN (top) and PB (bottom). (C) Corresponding plots from the same subject showing the expression of CD107a+ (red) Gag-specific CD4+ T cells in relation to CXCR5 and perforin for LN (top) and PB (bottom). Graphs represent the frequency of CD107a+ cells within the CXCR5+, CXCR5 hi and perforin+ compartments for LN (top) and PB (bottom) Gag-specific CD4+ T cells. (D) Biomark analysis illustrating the tSNE distribution of single SEB-stimulated CD107+ CD4+ T cells from LN (black) and PB (gray). Individual graphs represent the relative log2 expression of different markers that were significantly different (P < 0.05) between blood and LN CD107+ cells. Non-parametric Kruskal-Wallis tests with Dunn's multiple comparison test were performed to determine significant differences between groups; *P < 0.05, **P < 0.01 and ***P < 0.001. All data-points are derived from the North-American and Mexico cohort. https://doi.org/10.1371/journal.ppat.1006973.g006

Fig 7. Synaptic interface, degranulation pattern and kinetics of degranulation by LN and blood CD4+ T cells. Freshly isolated CD4+ T cells were exposed to planar lipid bilayers containing fluorescently labeled ICAM-1 and anti-CD3 antibodies, and the structure of the T cell/bilayer interface and the pattern and kinetics of degranulation were analyzed by TIRF microscopy. (A) Representative images of the T cell/bilayer interface demonstrating patterns of accumulation and segregation of TCR and integrin molecules and the appearance of CD107a proteins at the interface. The cells fall into 4 different groups: 1) T cells demonstrating the formation of a classical cytolytic synapse containing a central (cSMAC) domain, a peripheral ring junction (pSMAC), and centrally located CD107a indicating granule release (solid red); 2) T cells showing the formation of a cytolytic synapse without detectable granule release (solid blue); 3) T cells characterized by aggregation of TCR and integrin molecules without formation of a mature cytolytic synapse, but with detectable granule release (red stripes); 4) T cells that display overlapping aggregates of TCR and integrin molecules without granule release (blue stripes). (B) Diagrams showing the representation of LN- and PB-derived T cells of HIV-infected ART- and uninfected individuals with the different structures of synaptic interfaces and patterns of granule release displayed in panel (A); HIV-specific cloned CD4+ T cells AC25 are shown for comparison. (C) Time-dependent changes of the structure of T cell/bilayer interfaces and the appearance of released granules for representative T cells derived from LN and PB. (D) Quantitation of the kinetics of granule release by LN (closed circles) and PB (open circles) CD4+ T cells isolated from HIV-infected ART- (red circles) and uninfected (black circles) individuals. The kinetics of granule release by the HIV-specific T-cell clone AC25 is shown for comparison (open blue circles). Each individual circle designates the first appearance of detectable granule release by an individual cell. Median and IQR
are shown for all scatter plots. Mann-Whitney tests were performed to compare differences between indicated groups of T cells; *P < 0.05, **P < 0.01. https://doi.org/10.1371/journal.ppat.1006973.g007

S1 Fig. T-bet expression in non-cytolytic (blue) and cytolytic (red) CD4+ T cells. (PDF)

S2 Fig. T-bet and Eomes co-expression patterns within naïve and memory CD4+ T cell compartments. Flow plot (left) and scatter plots (right) showing the frequency of T-bet and Eomes expressing populations within naïve (CCR7+CD27+CD45RO-), transitional memory (TM; CCR7-CD27+CD45RO+), effector memory (EM; CCR7-CD27-CD45RO+) and terminally-differentiated (TD; CCR7-CD27-CD45RO-) CD4+ T cells. (PDF)

S3 Fig. β-chemokines are primarily produced by CD4+ T cells expressing high levels of T-bet. (A) Representative flow plot showing the co-expression between MIP-1α and MIP-1β for SEB-stimulated CD4+ T cells. (B) Co-expression patterns for T-bet versus MIP-1α, MIP-1β, and IFNγ by SEB-stimulated (top) and CMV-specific (bottom) CD4+ T cells. Frequencies of MIP-1α/β+ for T-bet hi, T-bet dim and T-bet- CD4+ T cells following SEB and CMV-pp65 stimulations. (C) Expression of MIP-1α and IFNγ by negative control (NC), CMV-specific and α-CD3-CD28-stimulated CD4+ T cells in a donor with a T-bet hi Eomes+ population (top) versus a subject with no such population (bottom). Non-parametric Kruskal-Wallis tests with Dunn's multiple comparison test were conducted to compare differences between groups; **P < 0.01 and ***P < 0.001. (PDF)

S4 Fig. T-bet hi Eomes+ CD4+ T cells are not productively infected by HIV. (A) Representative flow plots demonstrating Gag-p24 detection in CD25+ and CD45RO+ CD4+ T cells activated for 3 days with SEB. (B) Flow plot (left) and before-after graph (right) of Gag-p24+ cells within the Eomes+ compartment. A Wilcoxon test was performed to compare the difference between groups; ***P < 0.001. (PDF)

S5 Fig.
T-bet hi Eomes+ CD4+ T cell counts are not impacted by ART. (A) The impact of ART over time on the frequency of T-bet hi (red) and Eomes+ (blue) CD4+ T cells is (B) associated with a redistribution of naïve CD4+ T cells. * in (A) annotates differences between time-points after ART and before ART (week 0). Absolute counts of T-bet hi (red) and Eomes+ (blue) CD4+ T cells are not impacted by either (C) short-term or (D) long-term ART. A non-parametric Spearman test was used for the correlation analysis. Wilcoxon tests were performed to compare differences before and after ART. Non-parametric Kruskal-Wallis tests were conducted to compare differences at multiple time-points before and after ART. Median and IQR are shown for all groups. *P < 0.05; **P < 0.01 and ***P < 0.001. (PDF)

S6 Fig. Dynamics of IFNγ+ HIV-specific CD4+ T cell responses following HIV infection. IFNγ+ Gag-specific CD4+ T cells before and subsequently following HIV infection. The colored lines represent each subject and their absolute frequencies over time. (PDF)

S7 Fig. Gating scheme, Tfh correlations and comparison of CD4+ and CD8+ T cell expression of cytolytic molecules. (A) Representative gating scheme of blood (top) and LN (bottom) CD4+ T cells from the same HIV+ subject. (B) Flow plots (left and middle) demonstrating the expansion of Tfh cells in an HIV-infected CP and the corresponding TM phenotype (middle). The non-parametric Spearman correlation plot demonstrates that increased frequencies of TM cells are associated with the expanded pool of Tfh cells in HIV-infected LNs. (C) Frequency of perforin expression on memory CD4+ and CD8+ T cells from LN and blood in HIV-seronegative subjects (black), HIV+ CPs (red), and HIV+ ART+ subjects. Wilcoxon tests were performed to compare differences between CD4+ and CD8+ T cells. **P < 0.01 and ***P < 0.001. (PDF)

S8 Fig. Functional characteristics of polyclonal CD4+ T cell responses in HIV-infected LNs and blood. SPICE analysis of functional combinations between LN (red) and PB (red-gray) SEB-stimulated CD4+ T cell responses for HIV-infected CPs and ART+ subjects. Median and IQR are shown for all bar plots. Permutation tests were performed between the pie charts. Wilcoxon matched-pairs signed rank tests were performed to compare differences between two matched groups; *P < 0.05. (PDF)

S9 Fig. Functional characteristics of polyclonal and virus-specific LN and blood CD4+ T cells. (A) IFNγ and TNF co-expression by CMV-specific CD4+ T cells in LNs (left) and blood (PB; right). (B) Degranulation (CD107) by T-bet+ SEB-stimulated CD4+ T cells in LN and blood. (C) Before-after graphs showing the frequency of CXCR5+ (left) and CXCR5 hi (right) HIV-specific CD4+ T cells between matched LN and blood. Wilcoxon matched-pairs signed rank tests were performed to compare differences between two matched groups; *P < 0.05; **P < 0.01 and ***P < 0.001. (PDF)

S10 Fig. Sorting and single-cell gene expression procedure of degranulating CD4+ T cells. Sorting scheme to isolate single CD107+ SEB-stimulated CD4+ T cells (left). The right plot shows the correlation between single-cell and bulk relative gene expression values (log2) for every assessed gene. Genes with an average log2 > 1.5 were used for further analysis in Fig 6. A non-parametric Spearman test was used for the correlation analysis. (PDF)
Vascular smooth muscle cells in response to cholesterol crystals modulate inflammatory cytokine release and promote neutrophil extracellular trap formation Background The formation and accumulation of cholesterol crystals (CC) at the lesion site is a hallmark of atherosclerosis. Although studies have shown the importance of vascular smooth muscle cells (VSMCs) in atherosclerosis, little is known about the molecular mechanism behind the uptake of CC by VSMCs and their role in modulating the immune response. Methods Human aortic smooth muscle cells were cultured and treated with CC. CC uptake and CC-mediated signaling pathways and protein induction were studied using flow cytometry, confocal microscopy, western blot and Olink proteomics. Conditioned medium from CC-treated VSMCs was used to study neutrophil adhesion, ROS production and phagocytosis. Neutrophil extracellular trap (NETs) formation was visualized using confocal microscopy. Results VSMCs and macrophages were found around CC clefts in human carotid plaques. CC uptake in VSMCs occurs largely through micropinocytosis and phagocytosis via a PI3K-Akt dependent pathway. The uptake of CC by VSMCs induces the release of inflammatory proteins, including IL-33, an alarmin cytokine. Conditioned medium from CC-treated VSMCs can induce neutrophil adhesion, neutrophil reactive oxygen species (ROS) and neutrophil extracellular trap (NETs) formation. IL-33 neutralization in conditioned medium from CC-treated VSMCs inhibited neutrophil ROS production and NETs formation. Conclusion We demonstrate that VSMCs, owing to their vicinity to CC clefts in human atherosclerotic lesions, can modulate the local immune response, and we further reveal that the interaction between CC and VSMCs imparts an inflammatory milieu in the atherosclerotic microenvironment by promoting IL-33 dependent neutrophil influx and NETs formation. Supplementary Information The online version contains supplementary material available at 10.1186/s10020-024-00809-8.

Introduction

Atherosclerosis is a chronic inflammatory disease characterized by a dysfunctional interplay between the immune response and lipids. The progression of atherosclerotic plaque involves interaction between vascular smooth muscle cells (VSMCs), endothelial cells and immune cells, leading to recruitment of neutrophils, monocytes, lymphocytes and mast cells and to the release of inflammatory proteins (Libby 2002).

Cholesterol crystals (CC), a potential biomarker of atherosclerosis, are the most abundant crystalline structure found in the atherosclerotic plaque (Abela et al. 2009). Since their discovery, CC have been observed in many disease conditions, including kidney diseases, gallstone formation, periodontitis, myocardial infarction, ocular diseases, abdominal aortic aneurysm, and even central nervous system anomalies (Sedaghat and Grundy 1980; Scolari et al. 2000; Li et al. 2017; Chen and Popko 2018). In human atherosclerotic plaque, CC formation is influenced by many physicochemical factors, including the pH and temperature of the milieu, the presence of excess calcium, the saturation of free cholesterol, and the extent of hydration of free cholesterol molecules (Nidorf et al. 2020). The lethality of CC has been proven by many studies. Needle-shaped and plate-shaped CC, which expand in volume and pierce the blood vessel, can rupture the plaque with their sharp edges (Abela 2010a, b; Lim et al. 2011). Moreover, plaque rupture can facilitate breaching of CC and formation of CC emboli, leading to obstruction of medium to small arteries (Pervaiz et al.
Furthermore, CC emboli are known to induce tissue inflammation and distal ischemia (Pervaiz et al. 2018), indicating that CC play a lethal role in the later stages of atherosclerosis (Abela et al. 2009). CC are present in all stages of atherogenesis, and their appearance in atherosclerotic lesions coincides with the first appearance of inflammatory cells (Duewell et al. 2010). CC represent an endogenous danger signal that induces plaque inflammation via NLRP3-mediated IL-1β production during atherogenesis (Abela et al. 2009; Abela 2010a, b; Duewell et al. 2010; Varghese et al. 2016). The formation and accumulation of CC are known to promote inflammation by activation of the NLRP3 inflammasome and release of the potent pro-inflammatory cytokine IL-1β in macrophages, causing cellular dysfunction and acute vascular injury (Duewell et al. 2010; Lopez-Castejon and Brough 2011). In vascular endothelium, CC increase vascular permeability by disrupting adherens junctions and promote interaction between endothelial cells and monocytes (Mani et al. 2018; Pichavaram et al. 2019). Studies from the Benoit group have demonstrated that CC formation is initiated in tight association with the death of intralesional SMCs during the transition from fatty streak to fibroatheroma in human atherosclerotic plaque. Human VSMCs loaded with cholesterol can produce CC by inducing collagen-dependent changes in cholesterol metabolism and autophagy flux (Ho-Tin-Noé et al. 2017). CC are known to act as an endogenous danger signal that induces a sterile inflammatory immune response in atherosclerosis. IL-33, an IL-1-related cytokine, is known to augment sterile inflammation in cardiovascular diseases including atherosclerosis (Liew et al. 2016; Sun et al. 2021). Elevated concentrations of serum IL-33 and its receptors have been reported in high-risk carotid plaques and were associated with infiltration of inflammatory cells (Stankovic et al. 2019). Also, increased levels of IL-33 were associated with thrombotic complications and progression of carotid atherosclerosis in patients with rheumatoid arthritis (Dhillon et al. 2013). Neutrophils are one of the key target cells for IL-33 and exacerbate tissue damage and inflammation by triggering smooth muscle cell death in advanced atherosclerosis (Alves et al. 2010; Silvestre-Roig et al. 2019). Moreover, neutrophil extracellular traps (NETs) expelled from suicidal neutrophils have been detected in atherosclerotic lesions of humans and mice and have been shown to induce a proinflammatory response by activating endothelial cells (Megens et al. 2012; Quillard et al. 2015; Warnatsch et al. 2015). Although neutrophil infiltration destabilizes the atherosclerotic plaque (Silvestre-Roig et al. 2019), the interplay between VSMCs, neutrophils and IL-33 in atherosclerosis is not fully understood. Also, little is known about the immunomodulatory effect of CC uptake in VSMCs on the atherosclerotic microenvironment. Thus, the aim of the present study was to investigate the cellular and molecular mechanism of CC uptake in VSMCs from healthy donors and to examine the immunomodulatory effect of CC uptake in VSMCs on the neutrophil proinflammatory profile, including neutrophil extracellular trap (NETs) formation. Preparation of monohydrate cholesterol crystals (CC) CC were prepared as described elsewhere (Samstad et al. 2014).
Briefly, 100 mg of ultrapure cholesterol (Sigma Aldrich) was dissolved in 50 ml of 1-propanol. The solution was mixed with sterile water (1:1.5) and allowed to rest for 20 min for the crystals to stabilize. All steps were performed under sterile conditions at room temperature. 1-Propanol was removed by evaporation and CC were resuspended in PBS/0.05% human serum albumin (HSA). CC were further tested for endotoxin contamination using the Limulus amebocyte lysate (LAL) assay, and LPS levels were found to be below the detection limit (0.01 EU/ml). The prepared CC were in the size range 1-2 μm and stored at 4 °C. Immunohistochemistry Human carotid atherosclerotic plaques were obtained from patients (n = 3) undergoing carotid endarterectomy at the Division of Thoracic and Cardiovascular Surgery, Örebro University Hospital, Sweden. Informed written consent was obtained from all participants. The use of human atherosclerotic lesions was ethically approved by the Uppsala Regional Ethical Board (Dnr 2015/532). The study was performed in accordance with the guidelines of the Helsinki Declaration. The tissues were formalin fixed and paraffin embedded at the Department of Pathology, Örebro University Hospital, Sweden. The paraffin-embedded tissues were sectioned (4 µm), dewaxed and rehydrated using Tissue Clear (Sakura, Alphen aan den Rijn, The Netherlands) and decreasing concentrations of ethanol. The tissue sections were pretreated with Diva Decloaker buffer pH 6 (Biocare Medical, Pacheco, CA, USA) for 10 min at 110 °C to facilitate antigen retrieval. Staining was then performed using primary antibodies against smooth muscle actin (SMA; M0851; Dako; 1:500) and CD68 (NCL-L-CD68, Novocastra, Newcastle, UK; 1:50), diluted in Da Vinci Green (Biocare Medical) and incubated for 1 h at room temperature. The slides were then incubated with an AP polymer detection kit for visualization of SMA and CD68 using Warp Red. The tissue sections were counterstained with hematoxylin and dehydrated in ascending grades of ethanol prior to mounting with Pertex mounting medium (Histolab, Gothenburg, Sweden). The slides were scanned using the digital scanner Panoramic 250 Flash III (3DHistech, Budapest, Hungary) and micrographs were analyzed in Case Viewer using autosettings (open source version 2.0 software; 3DHistech; https://www.3dhistech.com/). Source of data An existing single-cell RNA (scRNA) sequencing dataset from human carotid atherosclerotic plaques from Slenders and coworkers (Slenders et al. 2022), accessed via the PlaqView website (http://plaqviewv2.uvadcos.io/), was used to examine the expression levels of IL-33 and its receptor IL1RL1 and to generate uniform manifold approximation and projection (UMAP) visualization plots of IL-33 and IL1RL1 gene expression in different cell types of the carotid plaque.
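As an illustration of this kind of query, the following is a minimal Python sketch, assuming the plaque dataset is available locally as an AnnData (.h5ad) file with gene symbols as variable names and a cell-type annotation column; the file name, column name and preprocessing choices are assumptions for illustration, not the authors' or PlaqView's exact pipeline.

```python
# Minimal sketch: UMAP feature plots of IL33 and IL1RL1 from a scRNA-seq dataset.
# Assumes a local AnnData file ("carotid_plaque.h5ad" is a placeholder name) with
# raw counts and a "cell_type" annotation column; not the authors' exact pipeline.
import scanpy as sc

adata = sc.read_h5ad("carotid_plaque.h5ad")  # hypothetical file name

# Basic normalization so expression values are comparable across cells
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Neighborhood graph and UMAP embedding
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.umap(adata)

# Overlay IL33 / IL1RL1 expression and cell-type labels on the embedding
sc.pl.umap(adata, color=["IL33", "IL1RL1", "cell_type"], save="_il33_il1rl1.png")
```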
Cell cultures Human umbilical vein endothelial cells (HUVECs) were purchased from Thermo Fisher Scientific (Waltham, MA, USA) and cultured in VascuLife basal medium supplemented with the VascuLife VEGF LifeFactors kit (Lifeline Cell Technology, Frederick, MD, USA) including antibiotics [penicillin (0.1 U/ml) + streptomycin (100 ng/ml)] (Thermo Fisher Scientific, Waltham, MA, USA). Human aortic smooth muscle cells (HAoSMCs) from different donors (cat# C-007-5C, Lot# 2164581, 27-year-old male; cat# CC-2571, Lot# 0000369150, 43-year-old male; cat# CC-2571, Lot# 0000335663, 22-year-old male; cat# 354-05a, Lot# 1596, 53-year-old male) were purchased from Cell Applications (San Diego, CA) and Lonza (Walkersville, MD). HAoSMCs were cultured in 231 smooth muscle cell culture medium (Gibco, Carlsbad, CA) containing the recommended cell growth supplements and antibiotics. Monolayers of all vascular cells were maintained in a humidified incubator with 5% CO2 at 37 °C and used between passages five and nine for all experiments. The culture medium was changed every 72-96 h and subculturing was performed upon confluency. Confluent flasks of vascular cells were trypsinised using 1X trypsin (Gibco, Life Technologies Limited) and seeded into flasks or plates for subculturing and experiments, respectively. For experiments, cells were seeded in 6-well (1.5 × 10⁵ cells/well) and 12-well (7 × 10⁴ cells/well) plates under the culturing conditions stated above. CC treatments of HAoSMCs were performed in 231 smooth muscle cell culture medium without growth supplements for 24 h. Treatments with inhibitors were performed 1 h prior to CC treatment. Cell- and crystal-free supernatants (CC conditioned medium, CC CM) were collected after 24 h of CC treatment and stored at − 20 °C until use. Control conditioned medium (Control CM/CTL CM) is the cell-free supernatant from untreated cells. All inhibitors used in the study are detailed in Additional file 1: Table S1. HAoSMCs are referred to as VSMCs in this article. Isolation of polymorphonuclear neutrophils Peripheral blood from healthy blood donors was collected in EDTA tubes at Örebro University Hospital. Human neutrophils were prepared from buffy coats obtained at the Örebro University Hospital blood central. According to Swedish law, the use of anonymized buffy coats does not require specific ethical approval. Human neutrophils were isolated from whole blood using Polymorphprep (Axis-Shield PoC AS, Oslo, Norway) and Lymphoprep (Axis-Shield PoC AS, Oslo, Norway) by density gradient centrifugation, followed by collection of the neutrophils in PBS. Erythrocytes were lysed by hypotonic shock and neutrophils were resuspended in the desired medium according to the treatment. The viability of the neutrophils was > 90% as determined by the trypan blue exclusion test. LDH assay Cell viability was evaluated by the Pierce lactate dehydrogenase (LDH) cytotoxicity assay in the supernatants according to the manufacturer's instructions. The absorbance was read using a Cytation 3 plate reader (BioTek, Winooski, VT, USA).
Flow cytometry HAoSMCs, untreated or treated with CC and inhibitors as described above, were harvested and resuspended in FACS buffer (PBS/0.1% FBS/1 mM EDTA). Cells were analyzed on a Gallios flow cytometer and data were analyzed using Kaluza software (Beckman Coulter). The gating was set according to the shift in side scatter (SSC) relative to untreated control cells, and CC uptake was quantified as the percentage of cells with a high granularity shift, indicated by a shift to SSC-high, as also described elsewhere (Donat et al. 2019). Flow cytometry gating strategies are shown in Additional file 2: Figures S2-4. The cells were labelled with 7AAD (Biolegend) to assess viability; cells negative for 7AAD staining were considered live. CC uptake in neutrophils was identified by gating for human CD66b [Bv421 PE-Cy7 Mouse Anti-Human CD66b (clone G10F5, Biolegend, Cat# 305116)] positive viable cells and the shift in granularity. The inhibitors used for screening of pathways are listed in Additional file 1: Table S1. Localization of CC in VSMCs using confocal microscopy VSMCs at a density of 10⁵ cells/well were cultured and treated with 0.5 mg/ml of CC in fibronectin-coated (10 µg/ml, 2 h; R&D Systems, UK) eight-chamber culture slides for 24 h. The cells were then fixed using ice-cold 4% paraformaldehyde for 30 min at room temperature, followed by incubation with ice-cold PBS containing 0.1% Triton X-100 for 10 min. The cells were washed with ice-cold PBS and blocked in 1% BSA in PBS containing 0.1% Triton X-100 for 30 min. F-actin was visualized by staining with rhodamine phalloidin, 400X (Invitrogen, Stockholm, Sweden) for 20 min in the dark, followed by staining of the nucleus with 4ʹ,6-diamidino-2-phenylindole hydrochloride (DAPI, Sigma-Aldrich, Germany) for 5 min in the dark; the cells were then washed twice with PBS. Slides were air-dried, mounted using antifade reagent and stored in the dark at 4 °C until viewed under the microscope. CC were visualized under polarized light, and images were acquired at 20X and 40X magnification and 1024 × 1024 resolution using a confocal laser scanning microscope SP8 (Leica, Germany). The images were further processed with LAS X software (Leica, Wetzlar, Germany). Detection of neutrophil extracellular traps (NETs) formation using confocal microscopy Neutrophils were treated with and without 0.5 mg/ml of CC, phorbol 12-myristate 13-acetate (PMA, 100 nM) and conditioned medium in eight-chamber culture slides for 3 h. PMA (10 nM/3 h) treatment was used as the positive control for NET formation. Cells were washed gently and fixed with 4% paraformaldehyde for 30 min at room temperature, followed by incubation with ice-cold PBS containing 0.1% Triton X-100 for 10 min. The cells were blocked in 1% BSA in PBS containing 0.1% Triton X-100 for 30 min and stained for F-actin using rhodamine phalloidin 400X for 20 min in the dark. The nucleus was stained with 2.5 µM of Sytox™ Green DNA stain for 5 min in the dark. Slides were mounted using antifade fluorescence reagent and kept in the dark at 4 °C until viewed under the microscope.
To quantify NETs, images were processed using ImageJ/FIJI software. The NET-forming population was identified using stringent parameters, including differential Sytox average size, intensity and shape deviation. Sytox Green images were converted to binary format, and image stacks were generated. These images were then converted to 8-bit grayscale, and a threshold (triangle threshold) was applied to the stacked images. Particle analysis was performed using the Region of Interest (ROI) manager, with ROIs defined by size (µm²) from 68 to infinity and circularity from 0.00 to 1.00. The average size and number of particles (ROIs) were evaluated. A NET surface-based threshold of > 68 µm² was used as described elsewhere (van der Linden et al. 2017). The relative fluorescence unit (RLU) was determined by calculating the corrected total nucleus fluorescence (CTNF) of NETs, which is the integrated fluorescence intensity (the nuclear area of each cell multiplied by the mean fluorescence of the selected area), using ImageJ/FIJI software.
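For readers who prefer a scriptable version of this quantification step, the following is a minimal Python sketch of the same thresholding and particle-analysis logic, assuming Sytox Green images are available as grayscale TIFFs and that the pixel size in µm is known; it mirrors the ImageJ/FIJI workflow in spirit and is not the authors' exact macro.

```python
# Minimal sketch of the NET quantification described above (triangle threshold,
# particle analysis with a > 68 µm^2 size cut-off, CTNF-style integrated intensity).
# Assumes grayscale Sytox Green TIFFs and a known pixel size (placeholder value).
import numpy as np
from skimage import io
from skimage.filters import threshold_triangle
from skimage.measure import label, regionprops

PIXEL_SIZE_UM = 0.3          # assumption: µm per pixel for the imaging setup
MIN_NET_AREA_UM2 = 68.0      # NET size threshold from van der Linden et al. 2017

def quantify_nets(path):
    img = io.imread(path).astype(np.float64)
    mask = img > threshold_triangle(img)          # binarize the Sytox signal
    labels = label(mask)
    nets = []
    for region in regionprops(labels, intensity_image=img):
        area_um2 = region.area * PIXEL_SIZE_UM ** 2
        if area_um2 >= MIN_NET_AREA_UM2:
            # CTNF-style readout: object area x mean fluorescence of the object
            ctnf = area_um2 * region.mean_intensity
            nets.append({"area_um2": area_um2, "ctnf": ctnf})
    return nets

results = quantify_nets("sytox_green_field1.tif")  # hypothetical file name
areas = [r["area_um2"] for r in results]
print(f"{len(results)} NET-sized objects"
      + (f"; mean area = {np.mean(areas):.1f} µm²" if areas else ""))
```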
Enzyme linked immunosorbent assay Enzyme-linked immunosorbent assay (ELISA) was performed to quantify CC-dependent IL-33 release in the supernatants of VSMCs using a DuoSet® ELISA kit (R&D Systems, Minneapolis, USA) according to the manufacturer's instructions. The absorbance was read at 450 nm using a Cytation 3 plate reader (BioTek, Winooski, VT, USA). Neutrophil labelling and neutrophil adhesion assay Neutrophils were labeled with calcein AM (Molecular Probes, Eugene, OR). In brief, neutrophils were labelled by incubating 5 × 10⁶/ml cells with 50 mg of calcein AM for 30 min at 37 °C in 18 ml of FACS buffer. Cells were then washed twice with PBS at 23 °C and resuspended in complete VascuLife medium. Endothelial cells were cultured in 96-well plates and incubated for 48 h with the different treatments, including conditioned medium (ratio 8:2) and an IL-33 positive control. Isolated neutrophils were labelled with calcein AM, 3 µg/ml (#2049068, Invitrogen), for 30 min. Endothelial cells were washed and incubated with the labelled neutrophils for 30-40 min. The endothelial cells were then washed and the fluorescence was measured. Olink proteomics Cell lysates and culture medium from HAoSMCs treated with and without CC were analyzed using the Olink Inflammation, Cardiovascular III (CVD III) and Cardiometabolic panels. Additional information about the Olink Proseek Multiplex Assay and gene ontology analysis can be found elsewhere (Paramel et al. 2020). Cytokine screening using Bio-Plex 200 system Human PMN were treated with CC (0.5 mg/ml) and with conditioned medium from CC-treated VSMCs, respectively, for 24 h in 231 smooth muscle cell culture medium with supplements. The supernatants from treated PMN from 3 different donors were pooled for cytokine screening using the Bio-Plex Pro Human Cytokine 48-Plex Screening Panel. Bio-Plex Manager version 6.1.1 Build 794 was used to analyze the results. Phagocytosis assay Uptake of E. coli BioParticles conjugated with pHrodo™ Red and of CC was used to quantify phagocytosis in neutrophils by flow cytometry. Cells (1 × 10⁵ cells/100 µl) were incubated under the different treatment conditions with pHrodo™ Red conjugated E. coli BioParticles and CC for 1 h and 3 h. Cells were analyzed on a Gallios flow cytometer and data were analyzed using Kaluza software (Beckman Coulter). Flow cytometry gating strategies are shown in Additional file 2: Figures S3, S4. Reactive oxygen species (ROS) measurement Total ROS production from neutrophils was measured using a luminol-horseradish peroxidase (HRP) assay. The conditioned medium was pre-incubated with control IgG1 or 1 µg/ml of anti-hIL-33-IgG prior to the addition of 100 ng/ml of recombinant IL-33 for 30 min. The neutrophils (10⁶) were added and incubated with luminol (0.1 mg/ml, Sigma) and HRP (4 U/ml, Roche). Luminescence was measured every third minute for 6 h, as previously described (Demirel et al. 2020). Statistical analysis Statistical analysis of the protein multiplex data was done using GraphPad Prism (version 9; GraphPad Software, San Diego, CA, USA; https://www.graphpad.com/scientific-software/prism/). Statistically significant differences between more than two treatment groups were assessed using one-way analysis of variance (ANOVA) with Bonferroni post hoc corrections (for normally distributed data). All results are represented as mean ± SD. For comparisons between two treatment groups, Student's t test, the Wilcoxon signed-rank test and the Mann-Whitney test were used (the latter two for data not normally distributed), and results are represented as mean ± SD or median. p-values ≤ 0.05 were considered statistically significant.
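As a rough illustration of the group comparisons described above, here is a minimal Python sketch, assuming measurements are held in plain arrays; it reproduces the one-way ANOVA with Bonferroni-corrected pairwise follow-ups in SciPy rather than GraphPad Prism, so it is an analogue of the analysis, and the example values are placeholders, not data from the study.

```python
# Minimal sketch: one-way ANOVA followed by Bonferroni-corrected pairwise t tests,
# analogous to the GraphPad Prism analysis described above. Example arrays are
# placeholders, not data from the study.
from itertools import combinations
from scipy import stats

groups = {
    "control": [1.0, 1.2, 0.9, 1.1],   # hypothetical replicate values
    "CC":      [2.1, 2.4, 1.9, 2.2],
    "CC+wort": [1.3, 1.1, 1.4, 1.2],
}

# Omnibus test across all treatment groups
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Pairwise comparisons with Bonferroni correction (multiply p by number of tests)
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```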
Ethics statement Informed written consent was obtained from all participants. The use of human atherosclerotic lesions was ethically approved by the Uppsala Regional Ethical Board (Dnr 2015/532). The study was performed in accordance with the guidelines of the Helsinki Declaration. VSMCs and macrophages were localized around CC clefts in human carotid plaques CC are known to be found in human atherosclerotic lesions. To further investigate the interaction of CC with vascular cells, we examined the localization of macrophages and smooth muscle cells in CC-rich areas of human atherosclerotic lesions. CC clefts were found in the necrotic core. Sections of human atherosclerotic lesions showed SMA- and CD68-positive cells in proximity to "CC clefts" in the necrotic core (Fig. 1A). Localization of SMA- and CD68-positive cells around the CC clefts indicates a possible interaction of CC with smooth muscle cells and macrophages. Phosphoinositide 3-kinase (PI3K) inhibition reduces CC uptake in VSMCs We next determined CC uptake in primary VSMCs from different donors. Internalization of CC was observed using confocal microscopy. Representative images of VSMCs treated with CC are presented in Fig. 1B. The Z-stacking of the acquired images shows the nucleus (DAPI), F-actin (rhodamine phalloidin) and CC (polarized light) in the same plane (Fig. 1C), confirming internalization of CC in VSMCs. A dose-dependent increase in CC uptake by VSMCs was observed using flow cytometry (Fig. 1D). Around 50-60% of VSMCs exhibited saturated cholesterol uptake at a CC concentration of 0.5 mg/ml. A moderate dose-dependent increase in cytotoxicity was observed in response to CC (Additional file 2: Figure S1). CC at a concentration of 0.5 mg/ml was therefore used for further experiments. We further investigated the signaling and endocytosis pathways involved in CC uptake in human VSMCs and found that inhibition of PI3K and its downstream protein, protein kinase B (Akt), significantly reduced CC uptake in VSMCs (Fig. 2A). Inhibition of PI3K signaling using wortmannin significantly reduced the p85 subunit of PI3K, whereas the p110α, p110β and p110γ subunits remained unaltered (Fig. 2B). Treatment with cytochalasin D and wortmannin reduced CC uptake in VSMCs by inhibition of F-actin and phosphatidylinositol 3′-kinase, respectively (Fig. 2A, C). CC uptake was reduced by 50-60% in response to the inhibitors wortmannin and cytochalasin D, most likely through inhibition of the macropinocytosis and phagocytosis pathways. The downstream signaling targets of the PI3K pathway, phospho-Akt and phospho-mTOR, were also reduced upon wortmannin treatment (Fig. 2B). A dose-dependent reduction in CC uptake was observed with increasing concentrations of wortmannin (Fig. 2C). To further investigate additional endocytic pathways involved in CC uptake, VSMCs were incubated in the presence of the endocytosis inhibitors filipin III, NPS-2143 and dynasore at non-cytotoxic concentrations. Filipin III, an inhibitor of caveolin-dependent endocytosis, showed no effect on CC uptake in human SMCs (Fig. 2D). A significant but moderate reduction in CC uptake was observed with 100 µM of dynasore, an inhibitor of clathrin-mediated endocytosis, in human SMCs (Fig. 2E). NPS-2143, an inhibitor of calcium-sensing receptor mediated constitutive macropinocytosis, showed a significant reduction in CC uptake at 10 µM (Fig. 2F). Taken together, we show that PI3K facilitates CC uptake in VSMCs by macropinocytosis and phagocytosis. VSMCs in response to CC alter inflammatory proteins We next determined the response of VSMCs to CC by analyzing protein secretion in the cell lysate and conditioned medium. VSMCs were cultured with 0.5 mg/ml of CC for 24 h, and the conditioned medium and cell lysate exposed to CC were compared to conditioned medium and cell lysate from untreated controls. Cytokine profiles in the secretome were determined using three different Olink multiplex protein panels of 92 proteins each (the Inflammation, CVD III and Cardiometabolic panels). The significantly altered proteins from the proteomics data were further analyzed using the STRING database to generate an interaction network. The gene ontology analysis identified a protein-protein interaction network comprising proteins involved in the regulation of leukocyte migration, low-density lipoprotein particle receptor activity, neutrophil activation, cytokine signaling in the immune system, the external side of the plasma membrane and the PI3K-Akt signaling pathway (Fig. 3G). Conditioned medium from CC-treated VSMCs induces ROS production in neutrophils and NETs formation Next, we investigated the impact of CC and conditioned medium generated from VSMCs on neutrophils. First, we measured the viability of neutrophils in response to CC and conditioned medium at 4 h and 24 h. The viability of neutrophils remained unaltered between CC treatment and untreated control at 4 h and 24 h (Fig. 4A, B). However, the conditioned medium from CC-treated VSMCs sustained the viability of neutrophils compared to conditioned medium from untreated controls at 4 h and 24 h (Fig. 4A, B). Since neutrophils are known to respond to particulate substrates either by phagocytic clearance or by NETs formation, we further assessed phagocytic activity by quantifying the uptake of pHrodo™ Red and CC at 1 h and 4 h. No significant difference was observed in the phagocytosis of pHrodo™ Red and CC at 1 h and 4 h in neutrophils in response to conditioned medium from CC-treated VSMCs when compared to conditioned medium from untreated controls (Fig. 4C, D). However, phagocytosis of CC was significantly induced in response to conditioned medium as such, although no difference was found between conditioned medium from CC-treated VSMCs and conditioned medium from untreated controls (Fig. 4C, D). The phagocytosis of pHrodo™ Red
E. coli BioParticles was significantly reduced by cytochalasin D (Fig. 4C). We next determined the total ROS production by neutrophils in response to conditioned medium from VSMCs pretreated with and without CC for 90 min. We found that conditioned medium from CC-treated VSMCs induced neutrophil ROS production when compared to conditioned medium from untreated controls (Fig. 4E). No significant differences in neutrophil ROS production were observed between neutrophils cultured in VSMC medium (control) and conditioned medium from untreated VSMCs. Neutrophils in response to CC showed a significant increase in ROS production (Fig. 4E). Thus, the data suggest that conditioned medium from CC-treated VSMCs induces ROS production and sustains the survival of neutrophils. However, conditioned medium from CC-treated VSMCs has limited influence on the phagocytic properties of neutrophils. Furthermore, we investigated whether conditioned medium from CC-treated VSMCs promotes NETs formation. We found that conditioned medium from CC-treated VSMCs induced NETs formation after 3 h of treatment (Fig. 4F). NET formation was also induced in response to CC alone. Altogether, these data show that the cytokines and growth factors produced by VSMCs in response to CC promote ROS production, neutrophil survival and NETs formation. Furthermore, the generation of NETs and the inability to phagocytose can possibly impart an inflammatory milieu in the atherosclerotic microenvironment. IL-33 and conditioned medium from CC-treated VSMCs promote neutrophil adhesion IL-33, an alarmin cytokine that attracts immune cells to the plaque, has been found upregulated in human atherosclerotic lesions (Stankovic et al. 2019). Using ELISA, we reconfirmed the proteomics data showing significant induction of IL-33 release in response to CC in VSMCs (Fig. 5A). IL-33 showed the highest fold change among the significantly altered proteins in response to CC in VSMCs. The significant increase in IL-33 release in response to CC was inhibited by increasing concentrations of wortmannin (Fig. 5B), suggesting that CC uptake induces IL-33 release in VSMCs, which may further promote infiltration of immune cells. We further explored the expression of IL-33 in human atherosclerotic lesions (n = 38) using the single-cell RNA (scRNA) sequencing dataset from human carotid atherosclerotic plaques from Slenders and coworkers (Slenders et al. 2022) via the PlaqView website. Both IL-33 and its receptor IL1RL1 were abundantly expressed by SMCs and endothelial cells (Fig. 5C). Moderate levels of IL-33 and IL1RL1 expression were observed in T cells and macrophages (Fig. 5C). We further investigated the effect of IL-33 and conditioned medium from VSMCs pretreated with CC on neutrophil adhesion. Endothelial cells were treated for 48 h with IL-33 alone or with conditioned medium from VSMCs pretreated with and without CC. When neutrophils were incubated for 30 min, IL-33 and conditioned medium from CC-treated VSMCs showed a significant increase in neutrophil adhesion compared to untreated controls (Fig. 5D, E). Thus, these data suggest that IL-33 release upon CC uptake in VSMCs can promote neutrophil adhesion and possibly exacerbate the CC-induced inflammatory response in the atherosclerotic microenvironment.
Neutralization of IL-33 in the conditioned medium reduces NETs formation and neutrophil ROS production Next, we investigated the influence of IL-33 in the conditioned medium from CC-treated VSMCs on NETs formation and neutrophil ROS production. We show that neutrophils treated with 100 ng/ml of recombinant IL-33 induced NETs formation. Pre-treatment with 1 µg/ml of anti-hIL-33-IgG or DPI, a ROS inhibitor, prior to treatment with IL-33 reduced NETs formation (Fig. 6A). Furthermore, pre-treatment of the conditioned medium from CC-treated VSMCs with 1 µg/ml of anti-hIL-33-IgG or DPI prior to incubation with neutrophils reduced NETs formation compared to conditioned medium from CC-treated VSMCs pre-treated with control IgG1 (Fig. 6B). NETs with a size > 68 µm² were not observed under these conditions. Furthermore, we also measured total ROS production in response to recombinant IL-33. IL-33 stimulation significantly induced ROS production in neutrophils compared to untreated controls, which was significantly inhibited by pre-treatment of the recombinant IL-33 with 1 µg/ml of anti-hIL-33-IgG (Fig. 6E). Pretreatment of conditioned medium from untreated controls with anti-hIL-33-IgG did not alter ROS production compared to conditioned medium from untreated controls with control IgG1 (Fig. 6F). However, pretreatment of conditioned medium from CC-treated VSMCs with anti-hIL-33-IgG significantly reduced ROS production compared to conditioned medium from CC-treated VSMCs with control IgG1 (Fig. 6G). This indicates that IL-33 induction by CC in VSMCs plays a crucial role in augmenting ROS production and NETs formation in neutrophils, thereby contributing to the inflammatory milieu in human atherosclerotic lesions. Discussion In the present study, we propose a molecular mechanism that integrates VSMCs and neutrophils with the CC-mediated immune response that most likely takes place in human atherosclerotic lesions. We show that VSMCs and macrophages are localized around cholesterol clefts, suggesting that VSMCs might contribute to the CC-driven inflammatory response. We also show that VSMCs can phagocytose CC via PI3K pathways and that VSMCs, upon CC uptake, release inflammatory mediators that promote neutrophil adhesion, ROS production and NETs formation. The present study also demonstrates that IL-33 release upon CC uptake in VSMCs is crucial in promoting the neutrophil-mediated inflammatory response. Whereas much of the focus on CC-induced responses has been directed towards macrophages, data on the CC-induced inflammatory response in VSMCs are scarce. Herein we show that, in addition to macrophages, VSMCs were found colocalized with cholesterol clefts in human atherosclerotic lesions. Earlier studies from Ho-Tin-Noé et al. have shown that CC are localized in the atheromatous core, the fibrous cap and its interface with the atheromatous core in human plaque (Ho-Tin-Noé et al. 2017). Our observations resemble those of Ho-Tin-Noé et al., who showed localization of CC in the vicinity of SMCs or arising from SMC residues (Ho-Tin-Noé et al. 2017). Using primary cultures of human VSMCs, we further found that VSMCs can phagocytose CC via the PI3K-Akt pathway. Although previous studies have independently shown the phagocytic activity of VSMCs and cholesterol crystal formation in cholesterol-loaded VSMCs (Kiyak 1997; Ho-Tin-Noé et al. 2017),
this study demonstrates the possibility that VSMCs take up CC through phagocytosis in human atherosclerotic lesions. Similar to VSMCs, studies have shown phagocytic uptake of CC in macrophages, which was abrogated by cytochalasin D (Rajamäki et al. 2010). Moreover, studies have shown that cholesterol crystals induce activation of PI3K, the pharmacological inhibition of which significantly reduced CC-induced IL-1α/β production in macrophages. This fits well with our findings showing that CC activate the PI3K signaling pathway, the inhibition of which significantly reduced CC uptake and CC-induced IL-33 production in VSMCs. Hence, these results provide strong support for the hypothesis that the CC-mediated cellular response largely depends on PI3K-dependent CC uptake in VSMCs. These findings are consistent with Swanson and colleagues, who originally identified PI3K as a key component of phagocytosis and showed that inhibition of the PI3K pathway inhibits macropinocytosis in macrophages (Araki et al. 1996). It has been shown that VSMCs are the most abundant intimal cells in human fatty streaks and early fibrolipidic lesions. We therefore explored the role of VSMCs in CC-induced inflammation. CC have been shown to induce a broad spectrum of functions in immune response and inflammation (Abela 2010a, b). In the current study we show that several inflammatory mediators such as 4E-BP1, uPA, TNFRSF9, MMP10, TGFa, CASP8, IL-33, CXCL10, STAMBP, CD40, Flt3L, ADA, MMP-3, IL6, IL8 and MCP-1 are markedly induced in response to CC in VSMCs. Several of these proteins play a crucial role in atherosclerosis by favoring inflammation, development and complication of the plaque (Mach et al. 1998; Silence et al. 2001; Falkenberg et al. 2002; Heller et al. 2006; Purroy et al. 2018; Soderstrom et al. 2018; Stankovic et al. 2019). Induction of IL6, IL8 and MCP-1 in response to CC in whole blood has been reported previously (Samstad et al. 2014). In addition, LAP TGFb-1, IL20, MMP1, AP-N, LDL receptor, BLM hydrolase, CXCL16, Gal-3, EPHB4, CSTB, PCSK9, ALCAM, CTSZ and CASP-3 were found to be downregulated in response to CC in VSMCs. Several lines of evidence suggest that the downregulation of most of these proteins accelerates atherosclerosis. Targeted disruption of CXCL16 and the LDL receptor has been shown to promote atherosclerosis (Aslanian and Charo 2006). Moreover, studies have shown that release of PCSK9 by VSMCs reduces LDLR levels in macrophages (Ferri et al. 2012), suggesting that our finding of reduced expression of PCSK9 in CC-stimulated VSMCs might influence the uptake of cholesterol in adjacent immune cells at the atherosclerotic lesion site. However, further studies are warranted to explore the influence of reduced levels of PCSK9 in VSMCs in relation to cholesterol crystal uptake. Also, Gal-3 is required for SMC survival and proliferation (Haigh et al. 2022), the lack of which might influence the survivability of VSMCs. Additionally, deficiency of CASP-3 expression in VSMCs has been shown to induce necrosis (Grootaert et al. 2016), suggesting that cholesterol crystals possibly induce necrosis in VSMCs. Although the combined action of the altered proteins has yet to be fully characterized, the physiological action of these proteins might vary according to the cell type, type of lesion and stage of atherosclerosis.
Notably, CC-induced IL-33 release was significantly reduced by inhibition of CC uptake in VSMCs. Elevated expression of IL-33 and its receptor ST2 has been implicated in vulnerable atherosclerotic plaques and was associated with infiltration of inflammatory cells in the plaques. It has been demonstrated that IL-33 is expressed in cholesterol-loaded endothelial cells, macrophages and smooth muscle cells (Stankovic et al. 2019). This is partially consistent with our data showing that CC uptake in VSMCs induces IL-33 release. IL-33 is an alarmin cytokine belonging to the IL-1 cytokine family that attracts immune cells and elicits a proinflammatory response (Cayrol and Girard 2022). Although some studies have indicated an atheroprotective role of IL-33 in reducing plaque size (McLaren et al. 2010), other studies have shown that IL-33-ST2 levels are increased in patients with carotid atherosclerosis (Stankovic et al. 2019). Our study suggests that CC, being an endogenous danger signal, induce the expression of IL-33 to attract immune cells such as neutrophils and further elevate the immune response, which is consistent with studies showing that IL-33 promotes infiltration of immune cells (Stankovic et al. 2019). We show that conditioned medium from CC-treated VSMCs and IL-33 promote neutrophil adhesion, suggesting a possible role of IL-33 and other inflammatory mediators in neutrophil infiltration. Also, it has been shown that the extent of neutrophil infiltration is associated with a pro-inflammatory state and rupture-prone lesions (Ionita et al. 2010). This is consistent with our protein interaction data showing that the proteins altered in response to CC are involved in T cell, monocyte, neutrophil and macrophage chemotaxis. Several lines of evidence suggest that neutrophils are present in the atherosclerotic lesion (Drechsler et al. 2010; Rotzius et al. 2010; Doring et al. 2012). Studies have also shown that neutrophils localize in the vicinity of CC clefts in human atherosclerotic lesions, thereby providing evidence for the importance of neutrophils in human atherosclerosis (Niyonzima et al. 2020).
Furthermore, we show that the viability of neutrophils remains unaltered in response to CC alone when compared to untreated controls. However, neutrophils appeared to sustain viability in response to conditioned medium from CC-treated VSMCs compared to conditioned medium from untreated control VSMCs, suggesting that the altered expression of several cytokines in the conditioned medium from CC-treated VSMCs promoted neutrophil viability. We further show that the viability of neutrophils had limited effect on their phagocytic function. Although conditioned medium from both CC-treated VSMCs and untreated controls increased the phagocytosis of CC in neutrophils when compared to CC alone, the conditioned medium from CC-treated VSMCs had limited effect on CC phagocytosis when compared to conditioned medium from untreated controls. CC by itself in fresh VSMC medium had limited effect on phagocytosis of CC in neutrophils. Unlike CC, increased phagocytosis of pHrodo™ Red was observed in neutrophils in fresh VSMC medium. However, the conditioned medium from CC-treated VSMCs had limited effect on pHrodo™ Red phagocytosis when compared to conditioned medium from untreated controls. These phagocytosis data suggest that the cytokines produced by VSMCs in response to CC have limited effect on neutrophil phagocytosis but augment neutrophil adhesion and viability. Further studies are needed to explore the conditions required for CC phagocytosis in isolated neutrophils. In addition to neutrophil adhesion, we found that conditioned medium from CC-treated VSMCs promotes neutrophil ROS production and NETs formation. Previous studies have shown that CC are localized in cholesterol-rich areas of the aortic root in apolipoprotein E (ApoE)-deficient mice and induce NETs formation in vitro (Warnatsch et al. 2015). We also show that CC induce NETs formation. Interestingly, we found IL-33 to be the major contributor to neutrophil ROS production and NETs formation. We show that IL-33 neutralization in conditioned medium from CC-treated VSMCs inhibited neutrophil ROS production and NETs formation. We further show that inhibition of ROS in neutrophils reduced NETs formation. A recent study has suggested that IL-33 triggers NETs formation by binding to IL-33R on neutrophils and by upregulating CD16 expression. We provide an additional mechanism for IL-33-mediated NETs formation via induction of ROS production in neutrophils, which is known to be an important prerequisite for NET formation (Fuchs et al. 2007; Hakkim et al. 2011; Tembhre et al. 2022). Previous studies have shown that CC trigger NETs formation, which further amplifies and drives sterile inflammation in atherosclerotic plaque by priming macrophages for cytokine release, recruiting immune cells and activating T helper 17 cells, suggesting that a cellular approach targeting IL-33 dependent NETosis might help in resolving the inflammatory burden of atherosclerotic plaque.
Conclusions In conclusion, our data demonstrate a crucial interaction of VSMCs with CC and neutrophils that drives sterile inflammation in the atherosclerotic microenvironment. Human atherosclerotic lesions contain VSMCs, CC and neutrophils, with VSMCs and neutrophils known to be in proximity to CC. We found that the crosstalk between VSMCs and CC imparts an inflammatory milieu that promotes a neutrophil-mediated immune response. VSMCs in response to CC promote neutrophil viability, but with limited phagocytic clearance of CC by neutrophils. Instead, the increase in IL-33 release in response to CC uptake in VSMCs modulates neutrophils to undergo ROS-dependent NETosis, which may further attract immune cells and amplify inflammation. Future studies are required to determine the impact of a cellular approach targeting IL-33 dependent NETosis and to address the systemic effects of such therapeutic interventions. Fig. 1 Localization of VSMCs and CC in human atherosclerotic plaque. A Immunohistochemistry was performed on paraffin-embedded sectioned plaques, which were stained for SMA- and CD68-positive cells. B Confocal microscopy was performed to visualize CC uptake in VSMCs at 8 h and 16 h (CC shown in white by polarized light, actin in red by rhodamine phalloidin, and nucleus in blue by DAPI staining). Scale bar, 250 μm. C Z-stacking 3D image showing CC, nucleus and actin filaments (yellow arrows) in the same plane, confirming uptake. Scale bar, 100 μm, objective 20X. D Dose-dependent uptake of CC in VSMCs for 24 h. Data are representative of experiments from VSMCs of 4 donors and displayed as mean ± SD. *p < 0.05, **p < 0.01. Fig. 3 Differential expression of proteins in response to CC in VSMCs (n = 4 donors) using Olink proteomics panels. Volcano plots displaying proteins differentially expressed between control and CC-treated samples in cell culture supernatant using the Inflammation panel (A) and in the lysate using the CVD III (B) and Cardiometabolic (C) panels. Colors represent FDR levels (red, FDR ≤ 1%; green, FDR ≤ 5%; blue, FDR ≤ 10%; black, FDR > 10%). The labeled dots represent proteins that were differentially expressed in response to CC treatment versus control VSMCs (FDR ≤ 10%). Levels of IL6 (D), IL8 (E) and MCP-1 (F) were measured using ELISA. Data are representative of experiments from VSMCs of 4 donors and displayed as mean ± SD. * p < 0.05. G The protein-protein interaction network as analyzed by STRING software. The red, dark green, pink and violet nodes represent proteins involved in T cell, monocyte, neutrophil and macrophage chemotaxis, respectively. The green, yellow, blue, purple, grey and brown nodes represent proteins involved in regulation of leukocyte migration, low-density lipoprotein particle receptor activity, neutrophil activation, cytokine signaling in the immune system, the external side of the plasma membrane and the PI3K-Akt signaling pathway. The colored lines represent the different possible associations between the proteins. A red line indicates the presence of fusion evidence; a green line indicates neighborhood evidence; a blue line indicates co-occurrence evidence; a purple line indicates experimental evidence; a yellow line indicates text-mining evidence; a light blue line indicates database evidence; and a black line indicates co-expression evidence. Fig. 4
Neutrophil ROS production and NETs formation in response to conditioned medium from CC-treated VSMCs. PMN were treated with CC or with conditioned medium from unstimulated (Control CM) and CC-stimulated (CC CM) VSMCs. Viability of PMN after 4 h (A) and 24 h (B) of treatment. Phagocytosis of pHrodo™ Red (C) and CC (D) after 1 h of treatment in PMN. ROS was measured in response to the treatments at 90 min (E). Data are representative of samples from 3 different experiments and displayed as mean ± SD. * p < 0.05, ** p < 0.01, *** p < 0.001. NETs formation in unstimulated PMN, CC-treated PMN, conditioned medium (CM) from unstimulated VSMCs, conditioned medium from CC-treated VSMCs and PMA for 3 h (F). Scale bar 250 μm, objective 10X; scale bar 250 μm, objective 20X; scale bar 100 μm, objective 40X. Fig. 5 IL-33 expression in response to CC in VSMCs. A VSMCs were treated with CC for 24 h and IL-33 release was measured using ELISA. B VSMCs were treated for 24 h with CC after preincubation with a PI3K inhibitor for 1 h; IL-33 release was measured using ELISA. C Uniform manifold approximation and projection (UMAP) visualization of IL33 and IL1RL1 gene expression in different cell types of carotid plaque from Slenders et al. (n = 38). D Representative image showing calcein AM labelled neutrophil adhesion on endothelial cells in response to IL-33 alone or conditioned medium (CM) from VSMCs treated with and without CC (0.5 mg/ml) for 24 h. Scale bar, 100 μm, objective 4X. E Quantification of neutrophil adhesion on endothelial cells in response to treatment. Data are representative of experiments from VSMCs of 4 donors and displayed as mean ± SD. * p < 0.05, *** p < 0.001. Fig. 6 Blocking IL-33 in the conditioned medium from VSMCs. A PMN were treated for 3 h with 100 ng/ml of recombinant IL-33 pre-incubated with 1 µg/ml of anti-hIL-33-IgG or DPI. B Conditioned medium from CC-treated VSMCs was pre-incubated with 1 µg/ml of anti-hIL-33-IgG or DPI for 3 h. Relative fluorescence intensity of NETs (C, D). ROS production was measured in PMN in response to 100 ng/ml of recombinant IL-33 (E), conditioned medium from unstimulated (Control CM/CTL CM) (F) and CC-treated (CC CM) VSMCs (G) pre-incubated with 1 µg/ml of anti-hIL-33-IgG for 30 min. Data are representative of samples from 3 different experiments and displayed as mean ± SD. * p < 0.05, ** p < 0.01, *** p < 0.001
Effect of Loading and Functionalization of Carbon Nanotube on the Performance of Blended Polysulfone/Polyethersulfone Membrane during Treatment of Wastewater Containing Phenol and Benzene In this study, a carbon nanotube (CNT)-infused blended polymer membrane was prepared and evaluated for phenol and benzene removal from petroleum industry wastewater. A 25:75 (by weight %) blended polysulfone/polyethersulfone (PSF/PES) membrane infused with CNTs was prepared and tested. The effect of functionalization of the CNTs on the quality and performance of the membrane was also investigated. The membranes were loaded with CNTs at different loadings, 0.5 wt. %, 1 wt. % and 1.5 wt. % pure CNTs (pCNTs) and 1 wt. % functionalized CNTs (fCNTs), to gain insight into the effect of the amount of CNT on the quality and performance of the membranes. Physicochemical properties of the as-prepared membranes were obtained using scanning electron microscopy (SEM) for morphology, Raman spectroscopy for purity of the CNTs, Fourier transform infrared (FTIR) spectroscopy for surface chemistry, thermogravimetric analysis (TGA) for thermal stability, atomic force microscopy (AFM) for surface nature and nano-tensile analysis for the mechanical strength of the membranes. The performance of the membranes was tested with synthetic wastewater containing 20 ppm of phenol and 20 ppm of benzene using a dead-end filtration cell at pressures ranging from 100 to 300 kPa. The results show that embedding CNTs in the blended polymer (PSF/PES) increased both the porosity and the water absorption capacity of the membranes, thereby resulting in enhanced water flux of up to 309 L/m²h for the 1.5 wt. % pCNT membrane and 326 L/m²h for the 1 wt. % functionalized CNT-loaded membrane. Infusing the polysulfone/polyethersulfone (PSF/PES) membrane with CNTs enhanced its thermal stability and mechanical strength. Results from AFM indicate enhanced hydrophilicity of the membranes, translating into enhanced anti-fouling properties. However, the % rejection of the membranes with CNTs decreased with an increase in pCNT concentration and pressure, while it increased for the membrane with fCNTs. The % rejection of benzene decreased by 13.5% in the pCNT membrane and by 7.55% in the fCNT membrane, while that of phenol decreased by 55.6% in the pCNT membrane and by 42.9% in the fCNT membrane. This can be attributed to poor CNT dispersion, resulting in the increased pore sizes observed when the CNT concentration increases. Optimization of membrane synthesis might be required to enhance the separation performance of the membranes. Introduction The petroleum industry is vital to the sustainability of energy and the economy of the globe and accounts for a large percentage of the world's energy consumption. However, it also poses a serious threat to the environment with the vast amount of wastewater produced daily. The wastewater needs to be treated to meet Environmental Protection Agency regulatory standards before being disposed to the environment or reused, and membrane-based treatment could be an option. Therefore, the availability of dependable membrane materials for the treatment of produced water from petroleum production could be instrumental in developing a point-of-use membrane system for this industry. This has led to the use of polymeric membranes. However, due to their limitations, various membrane modification methods, such as polymer blending, have been explored.
Polymer blending is a time- and cost-effective technique used to develop materials with unique anticipated properties, depending on the type of membrane needed [1]. Advantages of polymer blending include increased toughening, an extended service temperature range, improved barrier properties and flame-retardant properties. Polyethersulfone (PES) and polysulfone (PSF) are reported to be widely used for water treatment purposes due to their good membrane-forming performance, outstanding heat stability, visual transparency, and excellent solubility and selectivity as compared to other polymer counterparts. They are easy to fabricate, cost-effective and widely available in commercial markets [2]. However, they are both known to be hydrophobic, resulting in membrane fouling and insufficient mechanical strength; hence the application of these membranes in various industries to treat wastewater remains limited [2]. This problem can be solved by embedding nanoparticles with hydrophilic properties, such as carbon nanotubes (CNTs). Ever since the discovery of CNTs by Sumio Iijima in 1991, special attention has been centered on studying and understanding their structure, properties and use. They have strengthened the application of membrane technology, especially for water treatment purposes, due to their unique and outstanding characteristics. Some of these characteristics include the smoothness of their internal walls, which results in low levels of friction with water molecules during filtration applications [3]. Recently, the application of CNT-filled membranes as oil-containing water separation materials has attracted wide-ranging attention due to their properties, which include low density, high porosity, and electrical and thermal conductivity [4,5]. Other properties include high tensile strength, high elastic modulus and strain to fracture, and the capability to endure twisting together with cross-sectional distortions and high compression without breakage [6]. These properties stabilize and strengthen membranes; they also enhance their separation performance. However, to increase the hydrophilicity of the membranes, CNTs need to be functionalized to improve the surface property of the membrane. Functionalization also improves the dispersion of CNTs in the polymer matrix during fabrication, yielding composites with the desired intrinsic properties. This step is important because the more hydrophilic a membrane is, the better its fouling resistance; in fact, most fouling agents are naturally hydrophobic [7]. A previous study was conducted in which PSF/PES blended membranes were prepared at different compositions (0% PSF:100% PES, 100% PSF:0% PES, 20% PSF:80% PES, 25% PSF:75% PES, 50% PSF:50% PES and 80% PSF:20% PES), characterized and evaluated for the treatment of wastewater containing benzene and phenol. From these characterization and separation performance results, the best performing membrane (25% PSF:75% PES) was selected and further modified in this study [8]. In this study, five blended 25% PSF:75% PES membranes were prepared. Of the five membranes, three were embedded with pure CNTs at different wt. % loadings and one with functionalized CNTs. The objective of this study was to investigate the effect of CNT loading and functionalization of CNTs on the physicochemical properties, thermal stability, mechanical strength and separation performance of the PES/PSF blended membrane during the treatment of phenol-containing wastewater.
Preparation of Blended PSF/PES Membrane Ten percent (polymer-to-solvent ratio) of 25% PSF:75% PES (by weight) was dissolved in n-methyl-2-pyrrolidone (NMP) under continuous agitation at room temperature for 12 h. The casting solution was cast using a casting blade set at 180 µm on smooth glass. The glass, together with the dope, was then immersed in distilled water for the phase separation step. The formed membranes were left to dry at room temperature for 24 h. The membranes were then characterized. Purification and Functionalization of CNTs For purification, 1 g of raw CNTs was added to 100 mL of hydrochloric acid and then ultrasonicated for 4 h at 40 °C. The centrifuged residue was then washed with distilled water and filtered using vacuum filtration until the pH of the filtrate was neutral. The filtered CNTs were then dried at 100 °C for 24 h. For functionalization, 1 g of pure CNTs was reacted with a mixture of 75 mL of sulfuric acid (H₂SO₄) and 25 mL of nitric acid (HNO₃); the mixture was ultrasonicated for 4 h and heated at 40 °C. The same washing and drying protocol used during purification was applied. The dried CNTs were then characterized using Fourier transform infrared spectroscopy (FTIR) to confirm successful functionalization by checking for the presence of functional groups. Preparation of CNT/Blended Membranes The membranes were synthesized using the phase inversion method, with pure CNTs at different concentrations of 0.5, 1.0 and 1.5 wt. % and one concentration of 1 wt. % of functionalized CNTs. For each membrane, the CNTs were dispersed in 15 mL of NMP, and 10 wt. % of 25:75 PSF/PES was dissolved in another 15 mL of NMP and stirred for 20 h using a magnetic stirrer. After 20 h, the polymer solution was added to the CNT solution. The mixture was then stirred for 4 h until the CNTs were completely dispersed in the polymer mixture to form a casting solution. The casting solution was then cast using a casting blade set at 180 µm to form the polymeric nanocomposite membrane. The cast solution was then immersed in distilled water for the phase separation step, which takes a few seconds; it was left in the distilled water for 30 seconds until the membrane formed. The formed membranes were left to dry at room temperature for 24 h to ensure adequate loss of moisture before use. Equilibrium Water Content (EWC) and Porosity EWC is directly related to porosity. It measures the water absorption ability of a membrane through its pores. It was calculated using Equation (1), where W_w is the weight of the membrane when wet and W_d is the dry weight of the membrane: EWC (%) = (W_w − W_d)/W_w × 100 (1). The porosity of the membrane was determined from the mass loss of the wet membrane after drying. It was calculated using Equation (2), where ρ is the density of water and V is the total volume of the membrane: Porosity (%) = (W_w − W_d)/(ρ × V) × 100 (2).
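As a quick numerical illustration of Equations (1) and (2), here is a minimal Python sketch, assuming the standard gravimetric definitions reconstructed above; the sample weights and volume are placeholders, not measured values from this study.

```python
# Minimal sketch of Equations (1) and (2): equilibrium water content and porosity
# from wet/dry membrane weights. Sample values are placeholders, not study data.
WATER_DENSITY = 1.0  # g/cm^3

def ewc_percent(w_wet, w_dry):
    """Equilibrium water content, % of wet weight (Equation 1)."""
    return (w_wet - w_dry) / w_wet * 100.0

def porosity_percent(w_wet, w_dry, volume_cm3):
    """Porosity from mass loss on drying (Equation 2)."""
    return (w_wet - w_dry) / (WATER_DENSITY * volume_cm3) * 100.0

# Hypothetical membrane coupon: 0.52 g wet, 0.31 g dry, 0.45 cm^3 total volume
print(f"EWC = {ewc_percent(0.52, 0.31):.1f} %")
print(f"Porosity = {porosity_percent(0.52, 0.31, 0.45):.1f} %")
```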
Membrane Characterization Thermogravimetric analysis (TGA) was used to investigate the thermal degradation of the prepared membranes; measurements were carried out under a nitrogen atmosphere over the temperature range from 25 to 800 °C at a heating rate of 50 °C/min. A nano-tensile tester was used to analyze the mechanical strength of the membranes. Each membrane was cut into 10 × 30 mm pieces and tested. The force applied at membrane breakage and the difference between the initial length and the length at break for each sample were obtained. From these parameters, the tensile strength (breaking force divided by the initial cross-sectional area of the strip, Equation (3)) and Young's modulus (stress divided by strain, Equation (4)) were calculated. Atomic force microscopy (AFM) was used to investigate the surface topography and roughness of the membranes. The microscope was operated in non-contact mode at a resonance frequency between 75 and 100 kHz, and 1 µm × 1 µm images were obtained. A scanning electron microscope (SEM) was used to analyze the morphology of the membranes from the surface to the cross-section. The prepared samples were coated with carbon. The surface view and the cross-sectional view of the membranes were examined at a magnification of 5000× at an acceleration voltage of 10 kV. Membrane Performance The separation performance of the membrane was assessed through the dead-end filtration method; first, a standard experiment was conducted for pure water permeation to obtain the initial flux of the membrane for comparison with that of wastewater separation. The flux was calculated using Equation (5), where J_w represents the pure water flux (L/m²h), V is the volume of water (L) that permeated through the membrane, A is the area of the membrane (50 m²) and t stands for the water permeation time (0.3 h): J_w = V/(A × t) (5). This separation system uses nitrogen gas to sustain the pressure gradient needed to force the feed solution through the membrane. The performance of the membrane was tested with synthetic wastewater containing 20 ppm of phenol and 20 ppm of benzene. The reason for using synthetic phenol- and benzene-containing wastewater is that produced water contains many contaminants; the choice to focus on these two was due to their carcinogenic nature and high concentration in produced water, which categorize them as priority pollutants. The emulsion was constantly agitated and applied vertically to the membrane surface, and permeate passed through the membrane with the aid of the applied pressure. Ultimately, the water introduced into the dead-end cell passes through the membrane as permeate. For this experiment, each membrane was tested at different pressures of 100, 200 and 300 kPa. The rejection ratio was calculated using Equation (6), where C_p is the concentration of the permeate and C_f is the concentration of the feed: R (%) = (1 − C_p/C_f) × 100 (6).
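The following is a minimal Python sketch of the metrics in Equations (3)-(6), using the membrane area and permeation time quoted above; the breaking force, strain, permeate volume and concentrations are placeholders for illustration, not measurements from this study.

```python
# Minimal sketch of Equations (3)-(6): tensile strength, Young's modulus,
# pure water flux and % rejection. All sample values are placeholders.
def tensile_strength_mpa(force_n, area_mm2):
    """Tensile strength = breaking force / initial cross-section (Equation 3)."""
    return force_n / area_mm2          # N/mm^2 == MPa

def youngs_modulus_mpa(stress_mpa, strain):
    """Young's modulus = stress / strain (Equation 4)."""
    return stress_mpa / strain

def water_flux(volume_l, area_m2, time_h):
    """Pure water flux J_w = V / (A * t) in L/m^2.h (Equation 5)."""
    return volume_l / (area_m2 * time_h)

def rejection_percent(c_permeate, c_feed):
    """Rejection R = (1 - C_p/C_f) * 100 (Equation 6)."""
    return (1.0 - c_permeate / c_feed) * 100.0

# Hypothetical membrane strip (10 mm wide, 0.18 mm thick) breaking at 9 N
sigma = tensile_strength_mpa(force_n=9.0, area_mm2=10 * 0.18)
print(f"Tensile strength = {sigma:.1f} MPa")
print(f"E = {youngs_modulus_mpa(sigma, strain=0.25):.1f} MPa")  # strain = dL/L0

# Flux with the area and time quoted in the text; 4.6 L permeate is hypothetical
print(f"Flux = {water_flux(4.6, area_m2=50.0, time_h=0.3):.1f} L/m²h")
print(f"Phenol rejection = {rejection_percent(6.0, 20.0):.1f} %")  # 20 ppm feed
```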
Table 1 shows the EWC and porosity results. From the results, it was observed that after adding 0.5% pCNTs, both the EWC and porosity of the fabricated membranes decreased as compared to the membrane with 0% CNTs. However, the EWC and the porosity started increasing considerably with the increase in CNT loading, and further after adding fCNTs.

Effect of CNTs on Thermal Stability

TGA was used to determine the thermal stability of the nanocomposite membranes. CNT loading had a substantial effect on the thermal stability of the polymeric membranes. Due to their unique structural features, CNTs possess excellent thermal properties; the thermal stability of CNTs generally ranges from 720 to 2000 °C depending on their structure [9]. Figure 1 presents the results of the TGA. The slight weight loss observed between 25 and 200 °C is due to the loss of moisture and solvent used during the fabrication of the membrane. Considering Figure 1, the 0 wt. % membrane started degrading at 480 °C. Upon adding 0.5 wt. % pCNTs, the thermal stability decreased, with degradation occurring between 400 °C and 420 °C. This can be attributed to certain defects in the polymer matrix. However, the thermal stability improved after the addition of 1 wt. % CNTs, attributable to the high level of compatibility between the CNTs and the polymer matrix. Comparable behavior in terms of the thermal stability of membranes containing CNTs has been reported in the literature [10,11]. Upon the incorporation of fCNTs, the thermal stability declined. It has been reported that the functionalization of CNTs tends to reduce the thermal stability of nanocomposites, owing to the carboxylic groups present on the CNTs, which break down at lower temperatures [9]. This could explain the observation when 1 wt. % of fCNTs was added. Figure 2 shows a single high peak, which is an indication of a single-phase separation taking place, thus confirming good compatibility between the blended polymers and CNTs. The shallow peak between 100 and 200 °C represents the loss of solvent (NMP) contained in the membrane, which has a boiling point of 202 °C.

Effect of CNTs on Membrane Tensile Strength

Incorporation of CNTs has been reported to increase the life span of membranes by enhancing their robustness, making them less susceptible to breaking during operation [12]. This is due to the strong chemical bonds between carbon atoms found in a single graphene sheet of a CNT. Ideally, pressing on the tip of a nanotube causes it to bend rather than break, and it usually returns to its original shape when the pressing force is released [13]. This improves the membrane tensile strength when CNTs are incorporated in the polymer matrix. Figure 3 summarizes the results obtained from the nano-tensile tests.
It is observed that the tensile strength of the membranes increased with an increase in pCNT concentration. The 1.5 wt. % pCNT membrane is the most stable in terms of mechanical strength; its tensile strength increased by over 35% compared to the pristine membrane, making it the membrane with the best tensile properties in this study. Using the functionalized CNTs in the PSF/PES blend further maximized the tensile strength of the membrane, as expected. From these results, it is seen that embedding CNTs did improve the mechanical strength of the membrane, in agreement with the literature [14][15][16]. The improvement in the tensile strength of the membrane could be attributed to the enhanced interaction between the polymer matrix and the CNTs [17]. The same phenomenon was observed for Young's modulus. The ability of membranes to be stretched under constant pressure influences their industrial applications. As shown in Figure 4, the elasticity and strain of the membrane are dependent on the concentration of the CNTs; both increased with an increase in CNT concentration. From these results, it can be deduced that embedding CNTs has a positive impact on the mechanical properties of the membranes, confirming good interfacial interaction between the CNTs and the blended polymers.

Figure 3. Young's modulus for CNT/PSF/PES blended membranes.

Effect of CNTs on Surface Nature of the Membrane

Generally, PSF and PES are known as hydrophobic polymers that foul easily due to hydrophobic foulant molecules that are driven toward the surface [18].
AFM has become a vital tool for the membrane community when optimizing the fouling properties of membranes used for separation processes [19]. The rougher the surface of the membrane, the more susceptible it is to fouling over time. This is because rough surfaces serve as microhabitats where foulants flourish and form a cake layer, thus reducing the integrity of the membrane [20]. Therefore, a decrease in surface roughness reduces the ability of contaminants to accumulate on the membrane. In this study, AFM was used to analyze the effect of CNT loading in the PSF/PES membrane by examining the peak-to-valley and surface roughness measurements of the prepared membranes. In general, embedding CNTs alters the properties of the membrane surface, generating electrostatic forces between polymer chains [17]. As observed in Table 2, the roughness of the membranes decreases with an increase in CNT loading, making the membranes smoother than the unfilled polymer blend. This is due to the hydrophilic nature of CNTs, consistent with the study conducted by Choi [16]. The membrane with fCNTs reduced the roughness further compared to that with pCNTs; this is due to the added carboxylic acid groups on the CNT surface that reduce the adhesion of fouling agents. Previous literature has reported that during phase inversion, fCNTs tend to travel towards the surface of the membrane, thus inducing the hydrophilic nature of the surface, as expected [17]. Additionally, an increase in the hydrophilicity of a membrane provides more opportunity for water to chemically associate with the membrane surface rather than foulants [18]. These results indicate that the composite membranes embedded with CNTs possess smoother surfaces, further improving their anti-biofouling property. Membranes need to demonstrate non-adhesive properties in order to be considered robust hydrophilic membranes and be applied in wastewater treatment processes. Increased hydrophilicity of the CNT membranes changes the surface absorption properties, thereby improving the antifouling properties of the membrane [18].
Generally, fouling agents are negatively charged, so adding negative functional groups to the CNTs increases the negative charge density of the membrane, therefore repelling the fouling agents introduced to the membrane during operation [21]. The first sign of membrane fouling is decreased flux over time. It is evident that the addition of CNTs did have an effect on the surface nature of the membrane by altering the charge, the hydrophilicity, and the roughness [18]. The same membrane behavior was observed by Zhang et al. [22]. These results confirm the potential of incorporating CNTs to improve the overall performance of commercial membranes by preventing fouling, thereby reducing operational costs and increasing membrane lifespan.

Effect of CNT Loading on Morphology

In order to evaluate the effect of CNT addition on the blended membrane, morphological analysis was conducted using SEM. Images of a cross-sectional view of the membranes were taken at a magnification of 5000×. Figure 5 presents high-magnification cross-sectional images of the PSF/PES/CNT membranes. These images show an asymmetric porous structure with CNTs entangled with the polymer. It is clearly seen that the CNTs are arranged in a disorderly manner, forming tangled threads. As seen from Figure 5A, the pores are very small and are not visible at the magnification used. However, after the addition of 0.5% pCNTs in Figure 5B, a few CNT strands are visible between the macro-voids. Macro-voids were also formed and increased significantly as the CNT concentration increased. Interfacial defects between the untreated nanoparticles and the polymer solution were expected. It is observed that by increasing the CNT concentration, the agglomeration effect is increased and thus the pore formation frequency is increased. These observations are in full agreement with the flux results presented in Figure 6. The membranes with pure CNTs showed multiple localized aggregations, which usually happen when the CNTs are not uniformly dispersed; untreated CNTs tend to agglomerate due to van der Waals forces. Feng et al. [23] reported that incorporating inorganic nanoparticles increases pore sizes when compared to pristine membranes.

Effect of CNT on Pure Water Flux (PWF) and Removal of Phenol and Benzene from Wastewater

The flux of the membrane depends on its pore size distribution and physicochemical properties, especially the hydrophilicity of the surface. The effect of CNT loading on CNT/PSF/PES membranes was investigated through a pure water permeation test. The water flux through the membranes was recorded and compared.
As depicted in Figure 6, the membrane flux increased with an increase in pressure because of the increased driving force, which raises the capillary pressure across the membrane and hence through the pores [24]. CNTs are reported to improve flux due to their hydrophilic walls, which are able to interact with water molecules, creating a frictionless passage. It is observed that adding 0.5 wt. % pure CNTs to the blended PSF/PES caused a decline in flux compared to the pristine membrane. The reason for this could be the reduced frictional free volume of the polymer matrix. It was only after adding 1 wt. % and 1.5 wt. % pure CNTs that the flux was enhanced, increasing dramatically as the pure CNT concentration increased and reaching a maximum flux of 309 L/m²h at a pressure of 300 kPa. One membrane was prepared with functionalized CNTs to compare its performance with the pure CNT membranes. Functionalization of CNTs is generally known to influence both the chemical and physical properties of membranes, where the addition of these modified nanoparticles tends to further increase the surface pore sizes or the number of pores [25]. Mehrabadi et al. [17] reported that by functionalizing CNTs, some walls of the pore channels tend to break down, forming larger pores due to the accelerated phase separation during the phase inversion process. Nguyen et al. [26] argued that the increase in flux after functionalization is due to hydrogen bonding between water molecules and oxygen atoms from the added functional groups, forming a thin hydration layer on the membrane surface. This explains why the flux of the fCNT membrane was higher than that of all the pure CNT membranes, with a maximum flux of 326 L/m²h at a feed pressure of 300 kPa. The separation performance test was conducted to evaluate the effect of CNT concentration and CNT functionalization on the separation performance of the blended membrane at different pressure readings. However, the performance of such membranes is usually affected by fouling and concentration polarization. For this reason, several studies have focused on membrane modification to improve separation performance; these modifications enhance performance and open new avenues for the commercial application of membranes. The addition of CNTs not only influences the membrane flux, but also has an effect on its selectivity. The effect of CNT loading on phenol and benzene rejection was investigated during the treatment of synthetic wastewater containing 20 mg/L benzene and 20 mg/L phenol. Figure 7 shows that the PSF/PES membrane has a higher benzene rejection than the PSF/PES/CNT membranes. Figure 8 shows the % rejection of phenol: the PSF/PES membrane showed a higher phenol rejection at the lower pressure of 100 kPa; however, the % rejection decreased as the pressure increased.
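As a worked illustration of how the quoted flux and rejection figures follow from Equations (5) and (6), here is a minimal sketch; the numeric inputs are placeholders, not measured values from this study:

```python
# Minimal sketch of the flux and rejection calculations (Equations (5) and (6)).
# The numbers below are illustrative placeholders, not data from the paper.

def water_flux(volume_l: float, area_m2: float, time_h: float) -> float:
    """Pure water flux J_w in L/(m^2 h): permeate volume per area per time."""
    return volume_l / (area_m2 * time_h)

def rejection_percent(c_feed: float, c_permeate: float) -> float:
    """Percent rejection from feed and permeate concentrations (same units)."""
    return (1.0 - c_permeate / c_feed) * 100.0

if __name__ == "__main__":
    # Hypothetical run: 4.6 L permeated through a 0.05 m^2 coupon in 0.3 h
    print(f"J_w = {water_flux(4.6, 0.05, 0.3):.0f} L/m^2 h")
    # Hypothetical phenol reading: 20 ppm feed, 4 ppm permeate -> 80% rejection
    print(f"Phenol rejection = {rejection_percent(20.0, 4.0):.0f}%")
```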
The reason for the low % rejection in membranes with pCNTs could be attributed to poor CNT dispersion and poor compatibility between the polymer matrix and the CNTs, which results in increased macro-voids, creating pores larger than phenol and benzene molecules. The membrane with fCNTs showed enhanced % rejection compared to that with pCNTs for both phenol and benzene, confirming enhanced CNT dispersion, as expected after functionalization. These results agree with the analysis obtained from the SEM, as illustrated in Figure 5. Even at reduced feed pressure, the % rejection of membranes with CNTs tends to decrease with an increase in pCNT loading, which is a result of the increasing number of pores as the CNT concentration increases. This also explains the enhanced permeation flux of these membranes, as illustrated in Figure 6. These results are comparable to what Phasha reported [14]. A decrease in the selectivity of phenol and benzene was observed as the feed pressure increased; this can be attributed to the increased driving force that pushes phenol and benzene molecules through the membrane pores.

Conclusions

PSF/PES membranes embedded with pCNTs and fCNTs were prepared successfully via the phase inversion method. These membranes were tested during the treatment of synthetic wastewater containing phenol and benzene. The addition of CNTs enhanced both the porosity and the membranes' ability to absorb water from 1 wt. % CNT onwards. The flux of the membrane was 309 L/m²h for the 1.5% pCNT membrane and 326 L/m²h for the fCNT membrane. However, as flux increased, % rejection decreased as compared to the pure membrane with no CNTs. This is due to the opening of pores and macro-voids observed in the morphological analysis of these membranes. Though the % rejection decreased, all membranes showed acceptable benzene rejection, while only two membranes (1.5% pCNT and 1% fCNT) produced the desired results for phenol rejection. The effect of feed pressure on the performance of the membrane was also evaluated. An increase in feed pressure enhanced membrane flux; however, % rejection was inversely proportional to the increasing pressure. The morphological analysis of the membranes obtained from the SEM confirmed the presence of CNTs, with noticeable agglomeration of CNTs within the membranes. However, the functionalization of the CNTs improved their dispersion, attributable to improved interfacial interaction between the polymer and the nanoparticles. The mechanical strength, thermal stability, and surface nature of the membrane also improved drastically after adding CNTs, with optimum results obtained from the membrane with fCNTs, as expected. The roughness of the membrane decreased with an increase in CNT content, thus enhancing the antifouling properties of the membrane. According to the results obtained, reinforcing fCNTs in the blended membrane produced a desirable membrane material with good separation performance.
Multi Expression Programming Model for Strength Prediction of Fly-Ash-Treated Alkali-Contaminated Soils

Rapid industrialization is leading to the pollution of underground natural soil by alkali concentration, which may cause problems for existing expansive soil in the form of producing expanding lattices. This research investigates the effect of stabilizing alkali-contaminated soil by using fly ash. The influence of alkali concentration (2 N and 4 N) and curing period (up to 28 days) on the unconfined compressive strength (UCS) of fly ash (FA)-treated (10%, 15%, and 20%) alkali-contaminated kaolin and black cotton (BC) soils was investigated. The effect of incorporating different dosages of FA (10%, 15%, and 20%) on the UCSkaolin and UCSBC was also studied. Sufficient laboratory test data comprising 384 data points were collected, and multi expression programming (MEP) was used to create tree-based models yielding simple prediction equations to compute the UCSkaolin and UCSBC. The experimental results reflected that alkali contamination reduced the UCS (by 36% and 46%, respectively) of the kaolin and BC soils, whereas the addition of FA resulted in a linear rise in the UCS. The optimal dosage was found to be 20%, and the increase in UCS may be attributed to the alkali-induced pozzolanic reaction and the subsequent gain in UCS due to the formation of calcium-based hydration compounds (with FA addition). Furthermore, the developed models showed reliable performance in the training and validation stages in terms of regression slopes, R, MAE, RMSE, and RSE indices. The models were also validated using parametric and sensitivity analyses, which yielded comparable variation, while the contribution of each input was consistent with the available literature.

Introduction

The natural soil-water system can be altered by the interaction of pollutants released from mining, agricultural, and industrial activities. Rapid industrialization has manifested in the form of an exponential rise in the usage of chemical agents, which include hydroxides, carbonates, and bicarbonates [1,2]. Among the various pollutants, the adverse effects of alkali contamination on the soil-water system have been well established by earlier researchers [3,4]. Lower concentrations of alkali can alter the structure of the soil [5], while at higher concentrations, alkali can augment the formation of new compounds in the form of zeolites and ettringite [4,6]. The alkali interactions with expansive clays and the subsequent swelling are a natural phenomenon primarily driven by the presence of smectite-group clay minerals (montmorillonite, illite, and vermiculite) with expanding lattices [5,7]. However, Rao et al. [1] identified the heaving of foundations resting on non-swelling soils caused by acid and alkali contamination. The incidence of geotechnical failures induced by alkali interaction reported by Rao et al. [1] confirmed the adverse effects of alkali contamination. Traditionally, the swelling of soil was addressed by the utilization of conventional binders such as lime and cement. However, the efficiency of traditional binders was contradicted by Hunter (1990), who reported a 3.2 m heave of lime-treated soils. Further, with the growing impetus to promote sustainable binders with relatively lower carbon emissions, the usage of industrial by-products has gained momentum.
Among the various industrial by-products, the efficiency of fly ash (FA) as a sustainable stabilizer/binder has been broached by recent researchers [8][9][10][11][12]. Fly ashes inherently do not exhibit unconfined compressive strength (UCS) due to the lack/loss of cohesion in either dry or fully saturated conditions [13,14]. However, the UCS of FA is derived either from the difference in capillary action between the coarse and fine fractions [15][16][17] or from internal friction [18]. Though studies related to FA treatment of alkali-induced swelling are well documented, attempts to quantify the variation in strength characteristics are sparse [4,19]. Hence, an attempt has been made to evaluate the influence of alkali contamination on the UCS of two types of clay soils, i.e., expansive black cotton (BC) soil and non-expansive kaolin soil, for varying curing periods (1, 7, 14, and 28 days). Further, the effectiveness of FA (10%, 15%, and 20% by dry weight) as a stabilizer to enhance the UCSkaolin and UCSBC was also evaluated for the aforementioned curing periods. To overcome the laborious and time-consuming nature of laboratory studies for the evaluation of engineering characteristics, there is a growing trend of developing numerical models for solving engineering problems [20][21][22] and prediction models for the swift estimation of the engineering characteristics of soils [23,24]. Initially, most prediction models were developed on the basis of regression analysis with relatively limited databases. To overcome this limitation, artificial intelligence (AI)-based models are being promoted, primarily due to their ability to estimate/predict results even for larger databases. Sinha et al. [25] are among the early researchers to introduce artificial neural networks (ANNs), a machine learning technique implemented in MATLAB, for the estimation of compaction characteristics for a database of 55 soil samples. The advent of ANNs has resulted in their consistent usage in dealing with complex physical and mathematical problems [26,27]. Along similar lines, there has been a recent advancement in genetic programming (GP)-based models in the form of genetic algorithms (GAs) with the objective of identifying optimized solutions. Cramer (1985) was the first to introduce GP, which was subsequently improved by Koza [28] to allow programs of varying sizes and shapes. The most advanced methods among the existing linear GP techniques are gene expression programming (GEP) and multi expression programming (MEP); both are genotype computation programming methods that generate tree-like models/programs. Both methods address the limitations of GAs and GP, which manifests in their efficiency and swift execution (2 to 4 times faster) [29][30][31]. Their unique feature is their ability to learn and adapt by varying their shape, size, and composition, which is akin to living organisms [32,33]. However, unlike GEP, MEP adopts a demonstrative approach and utilizes linear chromosomes for the encoding of programs/solutions. The final solution (chromosome) is chosen based on the fitness value of each individual chromosome. In general, the governing parameters in MEP include subpopulation size and number, code length, function set, and crossover probability.
Recently, this approach has gained wide application in the geotechnical engineering field for addressing varied problems, including the prediction of compaction characteristics [34,35], compressive strengths [36,37], permeability and compressibility characteristics [38], deformation modulus [39], soil water characteristic curves, and peak ground acceleration [40]. However, attempts to generate models for the prediction of the geotechnical characteristics of contaminated soils are scarce. Moreover, considering the greater uncertainty associated with clayey soils, there is a need to develop a more realistic MEP model to estimate their strength characteristics. Thus, in the present study, attempts were made to develop an MEP model for the prediction of the UCS of alkali-contaminated clayey soils. For the development of the model, a comprehensive database of 384 soil samples was considered (part of the data was sourced from Ashfaq et al. [19]), and a brief description of the results and the contributing mechanism is presented in the following sections.

MEP Model Development

As stated earlier, the MEP approach is among the most significant linear configurations of the genetic programming (GP) series, since it has the ability to deliver simplistic mathematical formulae for a particular prediction model [35,41]. Therefore, the formulation of the UCSkaolin and UCSBC models was performed in Multi-Expression Programming X (MEPX version 2021.08.28.0-beta) by incorporating experimental records, as shown in Table 3. Sufficient laboratory test data from 384 different soil samples for the UCS prediction of FA-treated alkali-contaminated soils were collected by performing an experimental study [42]. The MEP genes are substrings of varying length, while the chromosome length is constant and equal to the number of genes on each chromosome. Each gene encodes an instruction producing either a function or a terminal symbol, and a gene encoding a function includes pointers to the genes holding the function's arguments. The function arguments always occupy lower positions than the location of the function on that chromosome [41]. A detailed methodology for generating equations is provided here, and details of the advanced GP approach (MEP simulation) can be found elsewhere [34,35,39,42-45]. Two-thirds of the entire dataset was considered for the MEP model development, whereas one-third was utilized to validate the formulated model. Table 4 shows the maximum and minimum values of the input (FA dosage, alkali concentration, and curing days) and output parameters (UCSkaolin and UCSBC) used to perform the strength prediction of FA-treated alkali-contaminated soils for both the training and testing data. Figure 1 shows the frequency histograms (i.e., the scatter of the data) of the input attributes considered in the current study. The curves are smooth and uniformly distributed, which indicates a well-distributed dataset. In addition, the standard deviation (SD), kurtosis, and skewness for all the parameters are given. A smaller SD shows that the values of a parameter lie near its average value. The kurtosis value represents the sharpness of the peak of a frequency distribution curve and clarifies the shape of the probability distribution [34]. It is pertinent to mention that the kurtosis value is only useful when used in conjunction with the SD value.
It is possible that an attribute might have a high kurtosis (bad) while the overall standard deviation is low (good). A kurtosis value of ±1 is considered very good for most psychometric uses, but ±2 is also usually acceptable. The kurtosis values of FA dosage, alkali concentration, curing days, and UCSkaolin approach zero and therefore represent a mesokurtic distribution, which can be seen in the histogram plot, i.e., Figure 1. However, the kurtosis value in the case of UCSBC is comparatively higher, which represents a leptokurtic distribution (Figure 1). Lastly, the skewness depicts the extent to which a distribution of values deviates from symmetry around the mean. Byrne (2010) argued that data are considered to be normal if the skewness is between −2 and +2. Furthermore, Table 5 shows the Pearson correlation coefficient values (represented by 'r') for the input parameters and the two output parameters, i.e., UCSkaolin and UCSBC. All three input parameters show a positive (linearly increasing) association with the outputs, since the r-values are positive. For both UCSkaolin and UCSBC, the impact of the parameters follows the order: FA dosage > alkali concentration > curing age.

Table 5. Pearson correlation coefficient values for the input parameters and the UCS of alkali-contaminated soils.

The details of 36 different trials (18 for each type of soil) undertaken to develop models for the UCSkaolin and UCSBC with an optimal combination of hyperparameters are provided in Table 6. The set of hyperparameters (i.e., number of subpopulations, size of subpopulations, code length, tournament size, and number of generations) was varied in this study to achieve the optimal performance of the models. A single parameter was modified while the rest were kept unchanged (as shown in Table 7) in an attempt to investigate the effect of different code settings on the correlation coefficient (R) and the mean squared error (MSE), as shown in Figures 2 and 3, respectively. For the UCSkaolin, the best performance was noted in the case of Trial 18 (R = 0.9465, averaged MSE = 1245), wherein the number of subpopulations, size of subpopulation, code length, number of generations, and tournament size were kept as 20, 1000, 100, 150, and 6, respectively. On the other hand, for the UCSBC, the best performance was noted in the case of Trial 14 (R = 0.9672, averaged MSE = 2220), wherein the number of subpopulations, size of subpopulation, code length, number of generations, and tournament size equal 100, 2000, 80, 150, and 2, respectively. Using the above-mentioned adjusted hyperparameter settings, the simplified mathematical expressions for the UCSkaolin (Equation (1)) and UCSBC (Equation (2)) were obtained, via the generated C++ code, in order to predict the targeted UCS.
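To make the linear MEP encoding described above concrete, the following toy sketch (in Python, unlike the C++ code that MEPX emits) shows how one chromosome simultaneously encodes a candidate expression at every gene; the gene layout, operator set, and input names are illustrative assumptions, not the fitted model:

```python
# Toy illustration (not the MEPX implementation) of a linear MEP chromosome.
# Each gene is either a terminal or an operator whose arguments point to
# earlier genes, so a left-to-right pass yields one candidate expression
# per gene; the best gene is later selected by its fitness.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate_chromosome(chromosome, terminals):
    """Return the value encoded by every gene for one input sample."""
    values = []
    for gene in chromosome:
        if gene in terminals:            # terminal gene, e.g. "FA"
            values.append(terminals[gene])
        else:                            # operator gene: (op, i, j) with i, j < index
            op, i, j = gene
            values.append(OPS[op](values[i], values[j]))
    return values

# Hypothetical inputs mirroring this study: FA dosage, alkali normality, curing days
sample = {"FA": 15.0, "N": 2.0, "D": 28.0}
chromosome = ["FA", "D", ("*", 0, 1), "N", ("+", 2, 3)]  # gene 4 encodes FA*D + N
print(evaluate_chromosome(chromosome, sample))
```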
Effect of Alkali Contamination

The variation in the UCSkaolin and UCSBC with curing periods and alkali concentrations is presented in Figure 4. The UCSkaolin and UCSBC decreased linearly with the rise in alkali concentration, and the BC soil exhibited a relatively greater fall in UCS compared to the kaolin soil. With the increase in curing period, the UCS of both soils increased linearly for the controlled case. However, under the contaminated case, the kaolin soil exhibited a slight increase in UCS, whereas the UCSBC remained constant at lower curing periods and was drastically reduced at higher curing periods. The variation in UCSkaolin and UCSBC in the untreated case can be attributed to their inherent mineralogical difference, which enables the formation of primary hydration compounds [4]. Under the contaminated scenario, the linear decrease in UCSkaolin and UCSBC may be attributed to the increase in the charge of clay particles with the pH of the soil. The rise in pH contributes to the subsequent dissolution of silica, which varies with the size and crystallinity of quartz, a mineral commonly found in both soils [4]. Furthermore, the increase in the UCSkaolin with the curing period may be attributed to the precipitation of hydration compounds such as nontronite and sodium silicate hydrate. Similar observations for clayey soils were made by Sivapullaiah and Reddy [4].

Effect of FA Dosage and Curing Period

The variation in the UCSkaolin and UCSBC with the alkali concentrations and FA dosage is presented in Figure 5.
For brevity, only the results pertaining to a 28-day curing period are presented here. It is evident from the results that the FA addition contributed sufficiently to the linear increase in the UCSkaolin and UCSBC for both the controlled and alkali-contaminated cases. For the contaminated case, the increase in the UCSBC is substantially higher, with an increment of more than 900% compared to the 350% increase noted for the kaolin soil. The increase in UCSkaolin and UCSBC is more pronounced at higher concentrations. The linear increase in the UCS of both soils is attributed to the decrease in clay content with the FA addition [2]. The greater increment at higher concentration is attributed to the greater affinity of dissolved silica (due to the higher concentration of alkali) to react with calcium from the FA and the subsequent formation of pozzolanic compounds. The pozzolanic compounds formed not only resist alkali attack on the mineral phases of the soil but also offer greater resistance to compressive loading, which is manifested in the form of increased UCS [4,19].

Comparison between Experimental and Predicted Results

This segment focuses on the efficacy examination and relative study of the MEP models generated to compute the UCSkaolin and UCSBC, using a variety of performance indices. To evaluate the prediction efficiency and accuracy of the proposed MEP models developed using the MEPX software, eight analytical standard indicators, namely the regression line slope, correlation coefficient (R), root mean squared error (RMSE), mean absolute error (MAE), root squared error (RSE), relative root mean square error (RRMSE), Nash-Sutcliffe efficiency (NSE), and performance index (ρ), were used in this study [46,47]. These performance measures are defined by Equations (3) to (9), where Ei and Pi are the ith actual and predicted output values, respectively; Ē and P̄ are the average values of the actual and predicted output values, respectively; and n is the number of samples. In addition, the objective function (OBF), as given in Equation (10), shall have a minimum value for better formulation of the model.
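The typeset Equations (3)–(10) were lost in extraction. Assuming the paper uses the standard forms of these indicators (as is common in the MEP literature it cites), they can be reconstructed as follows, with nT, nV, and n the training, validation, and total sample counts and ρT, ρV the phase-wise performance indices in the OBF:

```latex
% Reconstructed standard forms of Equations (3)-(10); assumed, not verbatim.
\begin{align}
R &= \frac{\sum_{i=1}^{n}(E_i-\bar{E})(P_i-\bar{P})}
         {\sqrt{\sum_{i=1}^{n}(E_i-\bar{E})^{2}\,\sum_{i=1}^{n}(P_i-\bar{P})^{2}}} \tag{3} \\
\mathrm{RMSE} &= \sqrt{\frac{1}{n}\sum_{i=1}^{n}(P_i-E_i)^{2}} \tag{4} \\
\mathrm{MAE} &= \frac{1}{n}\sum_{i=1}^{n}\lvert P_i-E_i\rvert \tag{5} \\
\mathrm{RSE} &= \frac{\sum_{i=1}^{n}(P_i-E_i)^{2}}{\sum_{i=1}^{n}(E_i-\bar{E})^{2}} \tag{6} \\
\mathrm{RRMSE} &= \frac{\mathrm{RMSE}}{\lvert\bar{E}\rvert} \tag{7} \\
\mathrm{NSE} &= 1-\frac{\sum_{i=1}^{n}(E_i-P_i)^{2}}{\sum_{i=1}^{n}(E_i-\bar{E})^{2}} \tag{8} \\
\rho &= \frac{\mathrm{RRMSE}}{1+R} \tag{9} \\
\mathrm{OBF} &= \left(\frac{n_T-n_V}{n}\right)\rho_T + 2\left(\frac{n_V}{n}\right)\rho_V \tag{10}
\end{align}
```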
A smaller OBF helps in overcoming the overfitting problem, and a value approaching zero exhibits an excellent predictive capability. To construct accurate and robust AI-based predictive models, the ratio of experimental values to inputs in the experimental database (as shown in Table 3) must be greater than 3 and ideally greater than 5 [48]. In the current investigation, the ratio is 269/3 = 89.66 (training set) and 115/3 = 38.33 (validation set), which is significantly within safe limits and therefore depicts the robustness and superiority of the developed MEP models for the kaolin and BC treated soils. The observed (actual) and forecasted UCSkaolin and UCSBC in the training and validation phases, as well as the efficacy metrics (i.e., slope, R, RMSE, MAE, RSE, RRMSE, and ρ), are shown in Figure 6a,b, respectively. The 45° regression line with the horizontal axis depicts the ideal-fit (1:1) line, having an inclination corresponding to 1 [49,50]. For good, reliable, and highly correlated models, the dispersion pattern of the data points should be close to the diagonal line crossing the origin, with a trend line of slope approximately equaling unity, an R-value greater than 0.8, and reduced error measurements (i.e., RMSE, MAE, RSE, RRMSE, and ρ), as shown in Figure 6 and Table 8. For both the kaolin and BC soils, the slopes of the trend lines are close to 1 (0.90: training, 1.01: validation; 0.97: training, 0.96: validation, respectively). In addition, R is above 0.8 (closer to 1) for both types of soils, which reflects a reasonably strong correlation between the model-predicted outputs (i.e., UCSkaolin and UCSBC) and the experimental observations. Furthermore, the OBF value for the kaolin soil was 0.025694481, whereas that for the BC soil was 0.025050897 in the current study. A higher R-value alone is not a sufficient indication of the reliability and accuracy of machine learning models [34]. Therefore, a number of error measurements were considered to validate the robustness of the developed models. These error metrics include RMSE, MAE, RSE, RRMSE, and ρ. The optimizer of the MEP algorithm was set to minimize the MSE while increasing the R statistic. In each model (kaolin or BC soil), the MSE and MAE were relatively low compared to the maximum expected output, while the RSE approached zero. The optimized UCSkaolin and UCSBC models have MSE and MAE values of 1245 and 19.6, and 2220 and 30, respectively, in the training phase. Furthermore, for both optimized models, the RSE tends to approach zero in each phase (i.e., training and validation), confirming their superior functionality. The consistent and accurate performance of the developed models is due to the structural flow of the MEP algorithm: MEP follows the reproduction procedure to move the relevant information to the subsequent generation and uses the mutation function for optimization inside the chosen chromosomes.
Thus, a predefined configuration of the function is not taken into consideration [51,52]. In addition, the MEP technique produces randomized functions and selects the one that best fits the experimental results [33,40,53]. It is essential to further validate the accuracy of the developed MEP models using the values of the residual error, i.e., the difference between the model-estimated and experimental UCS [54,55]. The positive/negative minimum and maximum errors obtained for the UCSkaolin model (Figure 7a) are −160 kPa and 100 kPa, respectively, and are ±130 kPa for the UCSBC model (Figure 7b). The majority of the error readings run along the x-axis, indicating a significant frequency of low error values. In conjunction with the significantly high correlations and reduced error measurements, the proposed models could be advantageously employed for the prediction of UCSkaolin and UCSBC, assisting practitioners and designers to save time and skip costly laboratory tests. The plots of the actual experimental values and the ultimate response of the MEP models for the estimation of UCSkaolin and UCSBC can be seen in Figure 8a,b, respectively. In each case, the modeled values of the training and validation phases closely follow the observed (experimental) output, which shows the efficiency and accuracy of the formulated MEP models. For the kaolin soil, the MAE and RMSE of the validation dataset are 6.31% and 7.94% lower than those of the training dataset, respectively, and 9.28% and 26.66% lower, respectively, in the case of the BC soil.
The improved performance in the testing stage depicts that the proposed MEP models have effectively learned the non-linear relationships among the input and response parameters with considerably lower error statistics and higher generalization capability [56,57]. Thus, the proposed models can be used for the prediction of UCSkaolin and UCSBC, which will aid in avoiding the laborious testing process.

Model Validity

The validity of the model is an important aspect of the AI modeling process. A model may perform better during the training stage for one set of data, whereas it may yield decreased performance for a new dataset. Therefore, an AI model shall be validated using an unused dataset to investigate the accuracy of the developed model for future applications [46,58,59]. As described in Section 3.2, the developed MEP models were validated using 30% of the experimental data; however, for further validation, a simulated dataset was created to evaluate the effect of the contributing parameters on UCSkaolin and UCSBC, presented as sensitivity and parametric analyses.

Sensitivity Analysis and Parametric Study of MEP Model

The testing of ML-based simulations is critical to ensuring that the recommended models are trustworthy and continue to perform well over a variety of datasets. The goal of sensitivity and parametric research is to confirm the efficacy of the proposed MEP models in terms of their interdependency on physical events [60][61][62]. The sensitivity analysis (SA) of the models on the complete dataset demonstrates how sensitive a generated model is to a change in the input variable in question [57,61,63]. The SA is used to evaluate the impact of the input factors employed in this study on the anticipated UCS of contaminated soils. For a specific independent variable (Yi), the SA is carried out with Equations (11) and (12) over the overall experimental database considered in the current research; a sketch of the procedure is given below. One of the independent variables was changed between its extreme values while keeping the remaining input variables at their average values, and the output was recorded in the form of f(Yi). Next, the second independent variable was changed and the output was monitored.
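A minimal sketch of this one-at-a-time sweep follows; the stand-in model, bounds, and means are illustrative assumptions, not the fitted MEP equations:

```python
# One-at-a-time sensitivity analysis as described above (Equations (11)-(12)):
# sweep one input across its range with the others held at their means, then
# normalize the resulting output ranges into relative importances.

def sensitivity(predict, bounds, means, steps=50):
    ranges = {}
    for name, (lo, hi) in bounds.items():
        outputs = []
        for k in range(steps + 1):
            x = dict(means)                       # all other inputs at their mean
            x[name] = lo + (hi - lo) * k / steps  # sweep the studied input
            outputs.append(predict(x))
        ranges[name] = max(outputs) - min(outputs)  # R_k = f_max - f_min
    total = sum(ranges.values())
    return {name: 100.0 * r / total for name, r in ranges.items()}  # SA (%)

# Hypothetical stand-in for the fitted model; not the actual Equation (1)/(2).
model = lambda x: 0.5 * x["FA"] * x["days"] + 10.0 * x["alkali"]
bounds = {"FA": (10, 20), "alkali": (0, 4), "days": (1, 28)}
means = {"FA": 15.0, "alkali": 2.0, "days": 14.0}
print(sensitivity(model, bounds, means))
```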
$R_k = f_{\max}(Y_k) - f_{\min}(Y_k)$ (11)

$\text{Relative Importance } (\%) = \mathrm{SA}\,(\%) = \dfrac{R_k}{\sum_{j=1}^{n} R_j} \times 100$ (12)

In the preceding equations, f_min(Y_k) and f_max(Y_k) are the minimum and maximum values of the anticipated results over the domain of the kth input variable, with the remaining inputs kept at their means. The results of the SA can be observed in Figure 9, which shows that, for the kaolin soil, the curing period and alkali concentration have almost equal contributions to the yielded UCS, while the FA dosage contributes 13.37% among the three attributes. In the case of the BC soil, the curing period significantly outperforms the other two variables; however, the alkali concentration and fly ash dosage also contribute 22.96% and 7.73%, respectively, which is an important aspect to consider when investigating FA-incorporated alkali-contaminated soil. Figure 10 visually represents the parametric analysis of the inputs used in this work (FA dosage, alkali content, and curing duration) for the prediction of the UCS of the kaolin and BC soils. The UCS of the kaolin soil varies linearly with the amount of fly ash and the alkali content, and a second-order polynomial trend is detected for the curing duration. In the case of the BC soil, a straightforward linearly rising trend is found for each input (FA dosage, alkali content, and curing duration). The increase in the UCS of both types of soils with the curing duration accords with the physical process involved and was correctly captured by the MEP models, thus validating the models in this respect. Zha et al. [64] found a significant increase in UCS up to 28 days of curing while investigating the stabilization of metal-contaminated soil by alkaline residue. A similar increasing trend in UCS with an increase in curing duration was observed by Fasihnikoutalab et al. [65]. An increase in UCS was also observed with a rise in alkali content and fly ash while investigating alkali-activated geopolymer-incorporated kaolin soil [65]. Therefore, the developed models are deemed validated on the basis of the new experimental dataset and the simulated data, which show model behavior similar to that of the physical process involved in alkali-activated kaolin and black cotton soil.

Conclusions

In the present experimental-cum-modeling study, the effect of alkali contamination on the strength characteristics of two clayey soils (i.e., kaolin and BC soil) has been evaluated.
Conclusions In the present experimental-cum-modeling study, the effect of alkali contamination on the strength characteristics of two clayey soils (i.e., kaolin and BC soil) has been evaluated. The efficiency of FA in remediating the alkali-induced effects was also assessed. Finally, the results were utilized to formulate MEP-based computational prediction models for computing the UCS of both types of soil (UCSkaolin and UCSBC) to overcome the demerits of laborious laboratory testing, cost, and time. The following conclusions can be drawn from the study: • The inundation of kaolin and BC soils in alkali solution caused the UCS to decrease; the higher concentrations had a significant impact in lowering UCSkaolin and UCSBC. On the contrary, FA treatment of the alkali-contaminated soils resulted in a linear increase in UCSkaolin and UCSBC, and a 7-fold increase was witnessed for the BC soil. Hence, it is concluded that the alkali contamination acted as an activator for a subsequent pozzolanic reaction when FA was incorporated. • The developed MEP-based equations (Equations (1) and (2)) for kaolin and BC contaminated soils can readily be used to forecast the UCS property. The equations have been generated from relatively high accuracy models evaluated using R, MAE, RMSE, and RSE (0.937, 19.6, 18.271, 0.128 and 0.956, 30, 17.151, 0.108) for the training data of kaolin and BC soils, respectively. • The generated models were evaluated using parametric and sensitivity analysis as second-level validation. The results of the parametric study manifested a variation in UCS conforming to the literature for kaolin and BC soil with the change in the given input parameters. The sensitivity analysis of kaolin soil showed that the curing period and alkali concentration had comparable contributions, followed by the FA dosage, whereas for BC soil the following decreasing order of importance was observed: curing period > alkali concentration > FA dosage.
2022-06-08T15:15:21.379Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "2fdde8f1b17a126342498adc44ec63a62e250b85", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/15/11/4025/pdf?version=1654518258", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c54c67bc43e0cae9497c49767f4b7a23a3b7619b", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
6834272
pes2o/s2orc
v3-fos-license
Reemergence of Syphilitic Uveitis Masquerading as Other Diseases: A Report of Two Cases During a 6-month period in 2010, 2 patients with uveitis were examined at our department and diagnosed with ocular syphilis. They initially presented with symptoms and signs resembling Harada's disease and Behçet's disease and were therefore treated with systemic steroids with suboptimal responses. When laboratory workup revealed neurosyphilis, they were given a course of intravenous penicillin G, which led to significant clinical and visual improvement. Epidemiological data indicate a worldwide reemergence of syphilis, and a high degree of suspicion is necessary in view of its multitude of presenting ocular signs without pathognomonic features. Introduction Although syphilis seemed to be under control after the advent of antibiotics, recent epidemiological data show drastically increased incidence in Europe and the United States since the late 1990s and 2000s [1,2]. Worldwide, there are an estimated 12 million new cases of syphilis every year, over 90% of which occur in developing countries [3]. The Hong Kong Social Hygiene Service reported around 1,400 new cases of syphilis per year in the 1970s, followed by a decline to about 300 new cases per year in the early 1990s [4]. Since 1998, the number gradually increased to over 1,000 new cases per year up to 2010 [4]. Ocular syphilis is an uncommon but diagnostically important manifestation of the disease. It was reported to occur in 2.5 to 5% of patients with tertiary syphilis [5]. A multitude of presenting ocular signs has been described among both human immunodeficiency virus (HIV)-positive and HIV-negative patients. It may first present in the eye, occurring in one or both eyes, without obvious systemic manifestations. The diagnosis of ocular syphilis is challenging because of its protean features and lack of distinguishing characteristics. During a 6-month period in 2010, 2 patients with widely disparate presenting ocular symptoms and signs were examined at our department and diagnosed with ocular syphilis. We report the clinical findings and management of these 2 cases, which illustrate the importance of a high degree of suspicion, prompt diagnosis and appropriate treatment of ocular syphilis in light of its reemergence in Hong Kong. Case 1 A 67-year-old heterosexual man presented with a 1-week history of decreased vision in both eyes. On examination, Snellen visual acuity was 6/60 and 6/18 in his right and left eye, respectively. A relative afferent pupillary defect in the right eye was noted. Slit-lamp examination was unremarkable, with an absence of cells in the anterior chamber and anterior vitreous. Neither were there any keratic precipitates, iris nodules or posterior synechiae. On dilated fundus examination, both optic discs were hyperemic with exudative retinal detachment in the macula (fig. 1a, b). There were no retinal exudates, hemorrhages or signs of vasculitis, and the vitreous was clear. Systemic review elicited no significant finding either. Two days later, the patient reported a further decrease in vision bilaterally. Visual acuity was hand movement in his right eye and counting fingers at 1 foot in his left eye. Intravenous fluorescein angiography showed leakage from the retinal pigment epithelial layer and optic discs bilaterally (fig. 1c, d). Complete blood count and immune markers were unremarkable except for a raised erythrocyte sedimentation rate (ESR) of 36 mm/h.
Given the acute onset of vision loss secondary to exudative retinal detachment, the hyperemic optic discs showing leakage on fluorescein angiography, together with an absence of ocular trauma, the provisional diagnosis of Harada's disease (uveitic phase) was made. The patient was treated with topical and oral prednisolone 1 mg/kg/day, and the visual acuity improved to 6/30 and 6/12 in his right and left eye, respectively. However, 5 weeks into steroid treatment, anterior chamber cells started to appear in his right eye. Indirect ophthalmoscopy revealed creamy-yellow retinal infiltrates along the inferotemporal arcade, hemorrhage and associated shallow subretinal exudate in the right eye (fig. 2a, b). The initial finding of exudative retinal detachment in the macula had resolved, but disc hyperemia was still present in both eyes. On fluorescein angiography there was early hyperfluorescence and late leakage associated with the inferotemporal lesion in the right eye (fig. 2c) and both optic discs. With the history of immunosuppression with oral steroid, the diagnosis of acute retinal necrosis was made, and the patient was treated with a course of intravenous aciclovir while systemic prednisolone was gradually tapered. One week after therapy was started, the patient complained of a headache, and systemic laboratory workup revealed a positive Venereal Disease Research Laboratory (VDRL) test; the subsequent fluorescent treponemal antibody absorption (FTA-Abs) test was reactive. Cerebrospinal fluid analysis demonstrated 160 leukocytes per microliter, elevated protein and a positive VDRL test. The patient was treated for neurosyphilis with a 14-day course of intravenous penicillin (4 million units every 4 h). Over the next month, the patient's visual acuity improved to 6/12 in the right eye and to 6/9 in the left eye. Right-eye iridocyclitis, subretinal fluid and infiltrates, together with optic disc hyperemia, resolved completely. Case 2 A 30-year-old heterosexual man with good past health presented with a 2-week history of decreased vision in both eyes. He also complained of oral and genital ulcers and a bilateral maculopapular rash on the palms and forearms. He initially presented to a private ophthalmologist, who found bilateral vitritis and made the provisional diagnosis of Behçet's disease. Oral prednisolone 60 mg daily was given for 4 days prior to referral to our center for further management. Snellen visual acuity was 6/12 in the right eye and 6/18 in the left eye. Slit-lamp examination revealed a quiet anterior chamber. Fundoscopic examination showed 1+ anterior vitreous cells, disc hyperemia and vitritis bilaterally. Laboratory workup was positive for VDRL and FTA-Abs tests and showed a raised ESR of 102 mm/h. An HIV test was positive. Infectious disease consultation was obtained and lumbar puncture was performed. Cerebrospinal fluid analysis demonstrated 133 leukocytes per microliter, elevated protein and a positive VDRL test. When the diagnosis of neurosyphilis was conveyed to the patient, he revealed a history of unprotected sexual exposure. The patient received a 14-day course of intravenous penicillin (4 million units every 4 h) and oral steroid was tapered. His Snellen visual acuity had improved to 6/9 in both eyes upon discharge. Discussion The diagnosis of ocular syphilis is often elusive because of its protean features and lack of distinguishing clinical characteristics. Because of its ubiquitous nature, the disease has been dubbed 'the great imitator'.
In both cases reported, the diagnosis of ocular syphilis was not considered prior to laboratory tests, and provisional diagnoses of Harada's disease and Behçet's disease were made. The most common presentation of ocular syphilis, reported by Anshu et al. [6] in a series of 22 consecutive patients in Singapore, was nongranulomatous anterior uveitis (81.8%), followed by vitritis (65.4%), papillitis (27.5%), scleritis/episcleritis (22.7%), interstitial keratitis (22.7%), granulomatous uveitis (13.7%), vasculitis (13.7%) and chorioretinitis (13.7%). Exudative retinal detachment has rarely been reported as the initial consequence of ocular syphilis. Jumper et al. [7] reported a series of 3 cases with exudative retinal detachment, but all presented together with focal retinitis. However, case 1 from our report presented initially with bilateral exudative retinal detachment without focal retinitis. The provisional diagnosis of Harada's disease led to the use of corticosteroids, which may have caused the subsequent manifestation of focal retinitis 5 weeks after commencement of the steroids. Fu et al. [8] described a multicenter series of 8 patients who had superficial creamy-yellow retinal precipitates with syphilitic retinitis regardless of sexual preference or HIV status. In our first case, we also found such superficial retinal precipitates, with an optical coherence tomography (OCT) appearance (fig. 2d) similar to that in a report by Reddy et al. [9]. All of the reported cases had rapid resolution of these precipitates after treatment with penicillin G. It was hypothesized that these were pre-retinal collections of leukocytes, representing an exaggerated ocular response to syphilitic infection as a result of immune reconstitution [8]. This distinctive feature may help differentiate ocular syphilis from other etiologies of retinitis such as herpes and cytomegalovirus infection. Ophthalmic manifestations have been reported in all stages of syphilis. Nonetheless, the presence of optic neuritis and retinitis is generally considered to indicate neurosyphilis and should be managed accordingly. The United States Centers for Disease Control and Prevention (CDC) Sexually Transmitted Diseases Treatment Guidelines recommended treatment of neurosyphilis with penicillin G (18-24 million units daily) for 10 to 14 days [8]. Increased leukocyte counts and a positive VDRL test on cerebrospinal fluid analysis were highly specific but not sensitive [10]. Hence, a negative result should not eliminate the suspicion of neurosyphilis. In addition, the VDRL titer may not be proportional to the level of disease activity, rendering it ineffective for monitoring the effects of treatment. If pleocytosis in the cerebrospinal fluid was observed at presentation, monitoring was suggested at 6-month intervals until the cell count normalized [10]. The use of systemic corticosteroids as an adjunct for posterior uveitis, scleritis and optic neuritis associated with syphilis has been described [3]. In addition, it may prevent the Jarisch-Herxheimer reaction resulting from a hypersensitivity reaction to treponemal antigens that are released in large numbers as spirochetes are killed [11]. HIV co-infection was found to be common (33%) and the CDC recommended that all patients with neurosyphilis be tested for HIV [1]. Syphilis is a rare cause of uveitis, accounting for 1-5% of reported cases [12,13]. Diagnosing ocular syphilis is difficult because a multitude of presenting signs has been described, without pathognomonic features.
Patients may deny a history of venereal disease exposure despite direct questioning. Awareness of the reemergence of syphilis reported in many parts of the world in the past decade, together with a high degree of clinical suspicion, can allow ophthalmologists to diagnose and treat the disease early, with a reasonably good visual prognosis following antibiotic treatment.
2014-10-01T00:00:00.000Z
2011-05-01T00:00:00.000
{ "year": 2011, "sha1": "d364d9e92e12d6e16e9c9b63fe36626a19005b0d", "oa_license": "CCBYNCND", "oa_url": "https://www.karger.com/Article/Pdf/331202", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d364d9e92e12d6e16e9c9b63fe36626a19005b0d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
130423050
pes2o/s2orc
v3-fos-license
A global historical Radiosondes and Tracked Balloons Archive on standard pressure levels back to the 1920s Introduction The radiosonde network was practically the only upper air observing system up to the late 1970s and is still a valuable source of meteorological and climatological information, although there are now plenty of other observations such as satellite or aircraft data (Dee et al., 2011). While several global radiosonde archives exist and are publicly available, such as IGRA (Durre et al., 2006) or CHUAN (Stickler et al., 2010), they only partly fulfil the needs of climate scientists due to inhomogeneities in the data and since wind data from tracked balloons are often not available on standard pressure levels. Almost all homogenized radiosonde datasets published so far, most notably Lanzante et al. (2003), McCarthy et al. (2008), Gruber and Haimberger (2008), Haimberger et al. (2012) and Allen and Sherwood (2008), have been restricted to the period 1958 onwards, since before then many radiosondes were not launched at standard synoptic times and data have been provided only on significant levels, not standard pressure levels. For tracked balloons, the situation is even worse, since those were tracked by theodolite or radar without any information on pressure. As such, the observations have been collected on altitude levels, where these altitudes often were relative to the station height and the levels were not standardized either. Even though millions of historical pilot balloon profiles have long been digitally available on tape decks TD52 and TD53, only a small fraction of these data (post-1958) was used as input for the NCEP/NCAR reanalysis after reprocessing. There are very few upper air wind climatologies so far that go back beyond 1958, and most of them work with monthly data although daily data are available back to the 1920s. Some studies have analysed the flow fields during special climatological events such as the Dust Bowl drought in the 1930s (Brönnimann and Luterbacher, 2004; Ewen et al., 2008). In a pioneering study, Brönnimann and Luterbacher (2004) used the data presented in "A historical upper air data set for the 1939-1944 period" (Brönnimann, 2003) to characterize the climate of the troposphere and stratosphere in 1940-42 related to a particularly strong El Niño event. Grant et al. (2009) took a first look at low frequency variability and trends of upper air temperature and geopotential, but not winds. The present study intends to improve the data availability by providing temperature and wind time series as far back as such data exist, but only on standard pressure levels. Data on altitude levels are interpolated to standard pressure levels using temperature information from the NOAA 20th century reanalysis (Compo et al., 2011). It is required that the time series are from ascending balloons (not kites or tethered balloons) and are at least 300 days long. As such, the dataset is smaller than CHUAN (Wartenburger et al., 2014; Stickler et al., 2013) but is easier to use for time series analysis. The source data sets are described in the next section, details on the interpolation methods to standard time/pressure are given in Sect. 3, and data counts and some results are presented in Sect. 4.
Input data Creating a uniform radiosonde dataset is challenging since there are many different data sources and digitization efforts are still ongoing around the globe, producing valuable data but in different formats. The dataset presented here can, however, draw on the results of earlier integration efforts such as IGRA (Durre et al., 2006) and on input data preparation efforts for reanalyses (Uppala et al., 2005; Dee et al., 2011). During the assimilation process, altitude level data are supplemented with pressure level information, and from those it is relatively easy to get standard pressure level data. These are available for a large fraction of the post-1957 data, since the ERA-40 (Uppala et al., 2005) and NCEP/NCAR (Kistler et al., 2001) reanalyses went back to 1957 and 1948, respectively. For altitude data that have not been assimilated yet, pressure information is constructed using geopotential information from the NOAA Twentieth Century Reanalysis (NOAA 20CR) (Compo et al., 2011), as described in the next section. The detailed list of input data used is as follows:

- The Comprehensive Historical Upper-Air Network (CHUAN) data set version 1.7 (Stickler et al., 2010; Wartenburger et al., 2014) and the ERA-CLIM Historical Upper-Air Data (Stickler et al., 2013). The ERA-CLIM historical upper-air dataset (acronym ECUD) contains upper air data collected and digitised within the EU 7th framework project ERA-CLIM. These archives contain mainly historical upper-air data prior to the International Geophysical Year 1957. The data sets consist of 20 million balloon ascents written in around 5000 files that represent ca. 2000 stations with geopotential, temperature, wind and humidity data. The first record goes back to 1900. Those data, as well as some post-1957 data, have never been actively assimilated.

- The Integrated Global Radiosonde Archive (IGRA) (Durre et al., 2006), updated until 2012. IGRA contains data at standard and significant pressure levels and, sometimes, complementary altitude levels (not used in this work, since they do not add information to the pressure data). It is quite comprehensive and goes back to 1938. Being a dataset collected in the US, it lacks a lot of data over Europe prior to the mid-1960s.

- The ERA-40 observation input dataset. This dataset in BUFR format has a lot of overlap with IGRA but contains several additional data over Europe, Japan and Antarctica that are missing in IGRA. The ERA-40 dataset starts in late 1957 and ends in 2002; however, only data from 1958-1978 are used.

- The ERA-Interim observation input dataset. It is equivalent to the ERA-40 observation input from 1979-2012 but is available in the far more convenient ODB format. The ERA-Interim input dataset goes up to present and is preferred to ERA-40 from 1979 onward.

- The NOAA 20th century reanalysis (Compo et al., 2011). It does not contain radiosonde information, but its geopotential field can be used to calculate pressure information for altitude levels. Since it is available back to 1872, even the oldest upper air data can be brought to pressure levels. In this work, it has been used as reference for the time/pressure interpolation of the data and for quality control purposes.

Not only the observation input data are used from the reanalyses. ERA-40, ERA-Interim and the NOAA 20th century reanalysis all provide valuable reference fields for comparison with radiosonde data. Observation minus analysis departures (obs-an) from the NOAA-20CR have been calculated for all observations used. In addition, observation minus background departures (obs-bg) have been extracted for both ERA-40 and ERA-Interim. These are an integral part of the dataset prepared here and they can greatly facilitate homogenization efforts (Haimberger, 2007; Haimberger et al., 2008, 2012; Gruber and Haimberger, 2008). The next step is to merge all these archives to obtain long time series spanning the whole operative period of all the available stations.
Station identification The station identification procedure is crucial in order to be able to join records coming from the same station but stored in different archives. For data assimilated in ERA-40 or ERA-Interim, this is relatively straightforward since the data must have a WMO number and precise coordinates (latitude, longitude, altitude and time) in order to be assimilated. The IGRA archive also offers WMO ID numbers and coordinates for all stations. The situation is much more difficult for the CHUAN and ECUD archives: they have been delivered with metadata files with geographical coordinates (latitude, longitude, altitude, launch time), station name and, if available, WMO identification number and/or WBAN (Weather Bureau Army Navy) ID number. Only around 42% of the stations have a WMO ID and 74% have a WMO and/or WBAN ID number. It has been recognized that many of the unknown stations can be marked with a full WMO ID number. Automatic methods to assign the correct WMO numbers to these stations are complex, if not impossible, for the following reasons: the station names differ between archives and, sometimes, they are reported in the local language including non-standard ASCII characters, so an automated unification appears to be very expensive; station relocations have often split records. In many cases it is possible to join them without introducing inhomogeneities. In other cases, different stations were operating simultaneously in the same area/city (PILOT and radiosonde, for instance) and, even if they are close by, they should be identified as independent stations and be merged only in a second phase. For the aforementioned reasons, a first manual check has been performed to cross-check the existing CHUAN and ECUD inventory files with the following metadata lists (they can be downloaded from ftp://srvx7.img.univie.ac.at): WMO Observing Stations and WMO Catalogue of Radiosondes; Radiosonde comprehensive metadata catalogue (Texas University); ERA-40 radiosonde list with metadata events; NOAA WBAN and WMO collection. Since no single reference file is complete, all of them are essential to assign and validate station WMO ID numbers. Where the four listed station inventory files are incoherent among themselves (lat/lon differs by more than 2°), and after a manual check via Google Maps, the most recent station location has been trusted. If the WMO ID number and lat/lon were perfectly matching (most of the cases), the station identification was straightforward. When the station name was the same (considering the different languages and spellings), the records were likewise joined. Interpolation from altitude to standard pressure levels The PILOT balloons used for upper air wind measurements were tracked using theodolites or radar; both instruments report geometrical height as the vertical coordinate. In both CHUAN and ECUD, the wind observations are reported on altitude levels (meters above sea level). An accurate interpolation from altitude to pressure (most likely not standard) and, in a second step, from non-standard pressure to standard pressure levels requires either temperature plus humidity or geopotential information. Using standard atmosphere temperature values would introduce unnecessarily large errors. Geopotential information is available globally every 6 h on a 2° × 2° grid from the NOAA 20CR. These values are interpolated bilinearly to the respective station locations (latitude/longitude).
At the station location we can now find the interpolation weight a from the formula

a = (φ x − φ 1 ) / (φ 2 − φ 1 ), where φ 1 < φ x < φ 2 ,

where φ 1 and φ 2 are geopotential values at NOAA-20CR model levels at the station location and φ x is the reported altitude of the measurement multiplied by g. Now it is possible to determine the corresponding pressure p x at the station location:

p x = (1 − a) p 1 + a p 2 ,

where p 1 and p 2 are the pressures of the bounding model levels. The pressure p x is most likely not yet a standard pressure level. In order to obtain values on standard pressure levels, we perform again a linear interpolation from the available pressure levels to the standard levels. This procedure was necessary also for assimilated PILOT data (from the ERA-40 and ERA-Interim input data), since those are only available on significant levels but not on standard levels. Time interpolation Not all radiosonde and PILOT stations report at 00:00 UTC and 12:00 UTC. Particularly before 1958, the launch times were not standardized. In order not to lose too much data at asynoptic times, a time interpolation has also been implemented that allows backwards continuation of many records. To take into account the diurnal cycle, we assume that the difference between the observation and the reference NOAA 20CR is constant within ±6 h of the observation. Thus, a simulated observation is generated by measuring the analysis departure obs-an at the time of the observation and assuming that the same departure exists at 00:00 UTC or 12:00 UTC. The observations are divided into three time categories (Table 2). These time categories are particularly important for the temperature measurements, since the sensor may be affected by solar radiation bias. For temperature, the time offset is at most 6 h, which is crude. It could, in principle, be reduced to 3 h if 4 synoptic times per day were considered instead of just 2. A cubic interpolation is considered suitable to interpolate the 20CR to the observation time t Obs . Using the departure definition

dep(t obs ) = Obs(t obs ) − 20CR(t obs ),

we calculate the observation at the synoptic time 00:00 UTC or 12:00 UTC as

Obs(t a ) = 20CR(t a ) + dep(t obs ).

Figure 1 summarizes the idea: the first two observations at synoptic times have not been manipulated. The third one was reported at 03:00 UTC and we would like to shift it to 00:00 UTC. For this purpose we interpolate the NOAA 20CR (it could be temperature or the U or V wind component) cubically to the observation time, using the 4 closest analysis data, and we calculate the departure observation minus reference. As a second step, we add the departure to the NOAA 20CR value at 00:00 UTC (in this case, t a in the picture), obtaining the reconstructed observation at the standard time 00:00 UTC. We take care that the same observation is not duplicated at 00:00 and 12:00 UTC. In order to ensure the consistency of the time interpolation, we compare the results with the raw radiosonde data from ERA-40. ERA-40 used "first guess at appropriate time", meaning that the background was compared to the observation at the time of observation and not at the nearest synoptic time. Figures 2 and 3 show the good agreement of the CHUAN data, after the two interpolations (altitude to pressure and time), with the ERA-40 data for the years 1957-1958 over the USA for temperature and the U, V wind components. The mean difference CHUAN − ERA-40 and the RMS are plotted against the standard pressure levels. The temperature shows excellent agreement, as expected, since the source data should be the same in CHUAN and ERA-40. The constant difference of 0.05 K is likely attributable to a different conversion from °C to K; in this work, the constant 273.15 has been adopted. The U and V wind components point out the good harmony between CHUAN and ERA-40; tiny differences are admissible and originate from the different methods adopted for the conversion from altitude to standard pressure levels. The largest departures are located around 300 and 150 hPa, where there are large vertical wind gradients and where the NOAA 20CR reanalysis has large temperature biases in some regions that may also lead to geopotential biases.
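Both interpolation steps can be sketched compactly. The Python fragment below is an illustration under stated assumptions, not the authors' code: the 20CR geopotential profile and the corresponding pressures are assumed to be already extracted at the station location (sorted by increasing geopotential), and the 4-point cubic interpolation is realised as an exact degree-3 polynomial fit.

import numpy as np

def altitude_to_pressure(phi_levels, p_levels, phi_x):
    # Bounding levels phi_1 <= phi_x <= phi_2, weight
    # a = (phi_x - phi_1) / (phi_2 - phi_1), linear interpolation of pressure.
    i = np.searchsorted(phi_levels, phi_x)
    phi1, phi2 = phi_levels[i - 1], phi_levels[i]
    p1, p2 = p_levels[i - 1], p_levels[i]
    a = (phi_x - phi1) / (phi2 - phi1)
    return (1.0 - a) * p1 + a * p2

def shift_to_synoptic(t_ref, x_ref, t_obs, obs, t_syn):
    # Cubic interpolation of the 20CR reference to t_obs using the
    # 4 closest analysis times; the departure obs - 20CR(t_obs) is
    # assumed constant and added to the reference value at t_syn.
    t_ref, x_ref = np.asarray(t_ref, float), np.asarray(x_ref, float)
    k = np.argsort(np.abs(t_ref - t_obs))[:4]
    coeffs = np.polyfit(t_ref[k], x_ref[k], 3)    # exact cubic through 4 points
    dep = obs - np.polyval(coeffs, t_obs)         # dep(t_obs)
    return x_ref[np.argmin(np.abs(t_ref - t_syn))] + dep   # Obs(t_a)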
Merging the different archives The good agreement and homogeneity between the time series coming from ECUD, CHUAN, IGRA, ERA-40 and ERA-Interim suggest that it is generally safe to merge these archives into a global one, in order to get longer, more complete and more usable time series. From Figs. 7 and 8 it is possible to follow the development of the upper air temperature and wind networks. While systematic wind observations begin in the 1920s (data stored in the ECUD and CHUAN archives), systematic temperature observations start only after 1945 (CHUAN and IGRA). A few pioneering temperature observations were already performed from 1900 (the Meteorologisches Observatorium Lindenberg/Richard Assmann Observatorium station (WMO 10393), in Germany, holds the longest record, with the first ascent dated 4 April 1900). In order to merge all the stations and to ensure efficiency, the following rules have been adopted: the station WMO ID number must be the same; the station location (latitude, longitude and altitude) must be the same (±0.5°) in the source archives, in order to preserve relocated stations; a data priority has been set: (1) ERA-Interim, (2) ERA-40, (3) IGRA, (4) CHUAN and (5) ECUD; only stations with more than 365 days have been considered; spike and consistency statistical tests have been performed in order to discard erroneously reported values. For each observed value (temperature, wind), analysis departures from the NOAA 20CR have also been calculated: departure(day, pressure, time) = Obs(day, pressure, time) − 20CR(day, pressure, time). A simple quality control has been performed on the raw data: date and time limits must be plausible (00:00 ≤ hour ≤ 23:59; we assume 24:00 = 00:00 of the next day, Gregorian calendar); temperature between −100 and +60 °C or the equivalent in K; wind speed between 0 and 200 m s−1; wind direction between 0 and 360°. Inside those ranges, the observations may still contain very unlikely/wrong values due to many possible causes, the most likely being typos in the log books and digitization mistakes. An observation has been dropped during the merging procedure if its analysis departure is bigger than 4 times the standard deviation σ of the departures for the considered pressure level. Figure 6 (Moscow station, temperature at 500 hPa) highlights the presence of spikes in the raw time series; those erroneous values are not propagated to the merged archive. A more comprehensive spike evaluation is shown in Fig. 4, where all the archives exhibit spike densities below 0.5% for all pressure levels. The exception is ECUD for temperature at 150 hPa, where the spike rate is 1%: the ECUD temperature data are prior to 1957 and come mostly from Siberia, a region where the NOAA-20CR suffers from strong biases with respect to ERA-40 and ERA-Interim, as Brönnimann et al. (2012) report. In order to avoid implausible spike flags due to the above-mentioned bias, the NOAA-20CR has been adjusted with the monthly difference with respect to ERA-Interim in the year 1979, calculated at the station location (bilinear interpolation), assuming the gap significant and constant. In Fig. 5, the global temperature, U and V wind component differences ERA-Interim minus NOAA-20CR at 150 hPa, averaged over 00:00 and 12:00 UTC for the year 1979, have been plotted. Particularly evident and strong is the warm bias present at high latitudes (beyond 60° N and S), up to 12 K; an annual cycle is also visible, with a strong signal between October and June. The opposite situation holds for the U wind component, where the strong difference (up to 8 m s−1) is concentrated in the tropical regions and equally distributed over the year. The V wind component does not show a strong bias.
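A minimal sketch of the plausibility checks and the 4σ departure screening described above (illustrative only; NaNs mark missing data):

import numpy as np

def plausible(temp_c=None, wspd=None, wdir=None):
    # Raw-data range checks applied before merging.
    ok = True
    if temp_c is not None:
        ok &= -100.0 <= temp_c <= 60.0     # deg C
    if wspd is not None:
        ok &= 0.0 <= wspd <= 200.0         # m/s
    if wdir is not None:
        ok &= 0.0 <= wdir <= 360.0         # deg
    return ok

def screen_departures(obs, ref, nsigma=4.0):
    # Drop observations whose analysis departure obs - 20CR exceeds
    # nsigma times the departure standard deviation at this level.
    dep = np.asarray(obs, float) - np.asarray(ref, float)
    keep = np.abs(dep) <= nsigma * np.nanstd(dep)
    return keep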
The time series viewer For simple time series visualization, a Javascript-based time series viewer, available at http://srvx7.img.univie.ac.at/~lorenzo/DEVL_rrvis_2.0/html/, has been developed. It allows quick monitoring of the data archive and permits visual detection of outliers and shifts. One can choose between observed variables (temperature and wind speed, direction, U and V components) and departures from different background series (NOAA 20CR, ERA-Interim and ERA-40). The observation time (00:00 UTC, 12:00 UTC and the 00:00-12:00 UTC difference) and the pressure level (from the 16 standard pressure levels) can be selected from the self-explanatory menu. Figure 10 shows the analysis departures (i.e. observations minus NOAA 20CR) for Moscow (27612, Russia). More details about the viewer can be found at http://reanalyses.org/observations/raobcorerich-visualization. Regarding the wind observations, the longest records have been collected in the USA, where the first upper air network was installed in the 1920s. The observations were performed with tracked balloons, with all the difficulties and challenges of this practise at that time: manual measurements of speed and direction using theodolites did not allow levels higher than 400 hPa to be reached. Only with improvements in the instrumentation was it progressively possible to reach higher levels (100 hPa was reached around 1950). The time coverage and distribution of the global upper air network are visible in Figs. 7 and 8. In order to explore the development of the global upper air network, it is interesting to examine, decade by decade, the number and position of the operative stations. In Fig. 9 the development of the upper air observing network is displayed (only stations with more than 5 yr of observations in the selected decade have been plotted). The first systematic wind observations date from 1920 over the United States, and they become dense from 1935 onward. In the years 1945-1950 a rudimentary upper air observation network is present in Europe, Russia (in Moscow, temperature observations have been maintained since 1938) and Japan, but a few important stations are also working in Australia, New Zealand, Hawaii, Polynesia and Africa, and stationary weather ships are operative in the Atlantic and Pacific Oceans. Already in the decade 1950-1960 the coverage in the Northern Hemisphere is satisfactory. From 1955 the Chinese and Australian radiosonde networks become fully operational. While the already existing observations are extended and reinforced (more stations with both 00:00 UTC and 12:00 UTC launches), the South American network (mainly Chile and Argentina) is set up. Almost in the same years, permanent stations become fully functional on the Antarctic coast and new weather ships are operative. In the most recent times and today, a homogeneous coverage of the globe is reached, even if there are still insufficiently dense regions, such as central Africa and South America. The maximum coverage was in the decade 1980-1990; afterwards there is a decrease, especially over former European colonies and the former Soviet Union. Nevertheless, a good spatial coverage has been maintained over the whole globe, but observation scarcity is still particularly evident over central Africa and South America, key tropical regions (stations in remote regions are particularly important, especially before the satellite era, since these are the only anchors for reanalyses where no other data are available).
The global radiosonde network reached its maximum extension, in terms of number of stations, in 1957/58, during the International Geophysical Year, when many new stations were set up and measurement campaigns were performed in remote regions (Siberia, polar regions, central Africa and South and Central Asia). In this biennium, more than 1600 stations were operative, roughly 1200 reporting wind and 900 reporting temperature, but unfortunately a large fraction was stopped after a few months; these do not contribute actively to the merged archive since their time series are too short. After the IGY, which was a crucial step towards expanding (new stations) and standardizing (unified observation procedures) the global upper air network, the observing system remained quite stable for 30 yr, with around 800 stations reporting TEMP and 1000 reporting wind, before it declined, especially for PILOT, over formerly colonized regions. In December 2012, 825 stations were active, 713 reporting temperature and 804 wind. Table 1 summarizes how the single archives contribute to the merged archive. For the temperature data of ECUD and CHUAN, 66.3% and 45.1%, respectively, of the available observations have been ingested into the merged archive. The percentages are not higher because the most recent data stored in these archives partly overlap with IGRA and/or ERA-40, which have higher data priority. For wind, more than 70% of the ECUD and CHUAN data flow into the merged archive. The newly digitized (ECUD and CHUAN) data contribute roughly 4.8% (temperature) and 10.4% (wind) to the merged archive. After the merging procedure, many stations now have more than 70 yr of continuous observations, which makes them extremely interesting and valuable for further studies. One should note, however, that these data are not yet homogenized. The inhomogeneities are, however, relatively easily visible if one studies the NOAA-20CR or ERA-40/ERA-Interim departure time series. One unequivocal example is reported in Fig. 12: the plot shows wind direction departures from the NOAA-20CR for station 072764 (Bismarck Municipal, North Dakota, USA). As expected, the departures (a 200-day running mean is applied) are constant and well balanced around the null line, but a strong bias (roughly 15°) is visible in the period 1938-1948 (only in data coming from the CHUAN archive). Conclusions The presented merged dataset contains upper air temperature and wind records on standard pressure levels back to the 1920s. It is specifically targeted for advanced quality control and bias adjustments and, of course, climatological analysis. It complements existing upper air datasets (ECUD, CHUAN, IGRA, ERA-40 input, ERA-Interim input) that are in total perhaps more complete (they contain altitude and/or pressure levels and also short time series with fewer than 300 days of observations) but also more difficult to use and not always aligned as time series. It contains not only the raw observations but also departures from the NOAA 20th century reanalysis (for the merged, IGRA, CHUAN and ECUD archives) and from ERA-40/ERA-Interim background forecasts. As such, the dataset is particularly suitable as a basis for a homogenized temperature and wind dataset that uses RAOBCORE technology for bias adjustments. The homogeneity adjustments and their effects on the time series and global mean trends are described in upcoming papers (Ramella-Pralungo and Haimberger, 2014; Haimberger et al., 2013).
The altitude to standard pressure level conversion involved the use of NOAA 20CR geopotential information. Its time resolution is relatively coarse, and future surface-pressure-only reanalyses, such as ERA-20C (Poli et al., 2013), will help to improve on that, since they have passively assimilated the upper air data and thus measure the background departures at the right time, which allows the time interpolation step to be avoided. Future surface-data-only reanalyses may also have smaller temperature and wind biases than the NOAA-20CR. The archive is available in convenient NetCDF format and can be visualized with a simple online plotting tool. The archive will be updated once a year, shortly after a full year has been completed in ERA-Interim. The file names have been created for easy and quick searching. The first digit can be: 0, the station has been identified as a WMO station; or 1, the station has not been identified as a WMO station. The next 5 digits are the WMO station identification number if the first digit is 0; otherwise they are the progressive number under which the station has been saved in the respective archive (CHUAN or ECUD, since only those two archives contain unknown stations). The V refers to the variable reported in the file, and the _t suffix refers to time series, which is the form in which the data have been stored. In each file, 13 dimensions, 18 variables and the global attributes are defined. The variables list is composed of (type ⇒ name): integer ⇒ stations ⇒ station ID, which works like the first 6 digits in the file name; integer ⇒ index_days ⇒ progressive index (from 1), where date(index_days) returns the day corresponding to index_days for the selected station; float ⇒ obs ⇒ the observation array, named in agreement with the variable reported in the file name; the arrays have dimensions obs(obs_time, pressure_layers, index_days), so that the minimum number of days is used to map the time series. After the observed time series, the departures (background departures from ERA-Interim and ERA-40 and analysis departures from the NOAA 20CR), flags and sonde type information are stored as follows: float ⇒ biascorrect ⇒ biascorrect(obs_time, pressure_layers, index_days), only available for the ERA-Interim and ERA-40 archives, where the bias correction procedure has been performed by ECMWF (Haimberger and Andrae, 2011; Andrae et al., 2004); integer ⇒ sonde_type ⇒ sonde_type(index_days), containing information on the sonde type used for each day with observations, as suggested by WMO (see the file WMO_sondetype3 in ftp://srvx7.img.univie.ac.at); integer ⇒ status ⇒ status(obs_time, pressure_layers, index_days), containing the data source archive for the given obs_time, pressure_layers, index_days; integer ⇒ anflag ⇒ anflag(obs_time, pressure_layers, index_days), a flag not used for this data type; integer ⇒ event1 ⇒ event1(obs_time, pressure_layers, index_days), a flag not used for this data type. The file is equipped with global attributes, e.g. Conventions = "CF-1.4" (the NetCDF file convention) and a title.
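Reading such a file might look as follows. This sketch assumes the netCDF4 Python package; the file name and the level index are invented for illustration, while the variable names (obs, status, sonde_type) follow the list above.

import numpy as np
from netCDF4 import Dataset

# Hypothetical name following the described pattern: leading 0 for a WMO
# station, then the 5-digit WMO ID, the variable code and the _t suffix.
with Dataset("027612_t_t.nc") as nc:
    obs = nc.variables["obs"][:]           # (obs_time, pressure_layers, index_days)
    status = nc.variables["status"][:]     # source archive of each datum
    sonde = nc.variables["sonde_type"][:]  # WMO sonde type per launch day
    series = obs[0, 5, :]                  # 00 UTC at one standard level (index assumed)
    print(np.nanmean(series))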
Fig. 1. Time interpolation. When there are no observations available at 00:00 or 12:00 UTC but only at other times, a reference value at the time of the asynoptic observation t obs is calculated from the NOAA 20CR, employing a cubic interpolation with the 4 closest values. The difference Obs(t obs ) − 20CR(t obs ) is assumed constant between t obs and the closest synoptic time t a . The observation at time t a is gained by adding the departure dep(t obs ) = Obs(t obs ) − 20CR(t obs ) to 20CR(t a ).

Fig. 2. Mean (solid) and RMS (dashed) temperature difference between observations from CHUAN and ERA-40, averaged over 90 stations at 00:00 and 12:00 UTC, for the years 1957-1958 in North America, on standard pressure levels. The total number of observations at each pressure level is also reported. Since those data are already on pressure levels, no altitude to pressure interpolation is needed. The constant vertical shift of 0.05 K is likely due to a different conversion from Celsius to Kelvin; in this work we have used 0 °C = 273.15 K.

Fig. 3. Mean (solid) and RMS (dashed) U (left panel) and V (right panel) difference between observations in CHUAN and ERA-40, averaged over 91 stations at 00:00 and 12:00 UTC, for the period 1957-1958 in North America. The total number of observations at each pressure level is also reported. Differences come from the different interpolation from altitude to standard pressure levels. Only WMO stations have been used.

Fig. 4. Temperature (upper panels), U (middle panels) and V (bottom panels) wind component spike frequency (%) as a function of pressure, for the whole period 1900-2010. The legend reports the archive and, in brackets, the reference used for the spike check (departures bigger than 4σ). Globally, the spikes identified are always < 1% except for the ECUD temperature (only data prior to 1957), where at 150 hPa the spike density is roughly 1%. For all the other archives and variables, the density remains below 0.5%. Only WMO stations have been used.

Fig. 5. Temperature (top), U (middle) and V (bottom) wind components: difference ERA-Interim vs. NOAA-20CR, at 150 hPa, mean over 00:00 and 12:00 UTC, for the year 1979. The V wind component shows only a weak bias (less than 1.5 m s−1), while T and the U wind component evidence strong differences. Remarkable are the discrepancies affecting temperature, especially in the polar regions, concentrated between October and May (up to 12 K). The opposite holds for the U wind component, where the difference is focused mainly in the tropical regions with amplitude in the range [−8, 8] m s−1.

Fig. 6. Temperature observation time series of Moscow station (027612, Russia), at 500 hPa and at 00:00 UTC; green curve IGRA archive, red curve ERA-40 station archive. Two spikes are visible on the left-hand side of the plot: while no data are available in the IGRA archive, there are suspicious data in the ERA-40 archive. On the right-hand side of the plot there are four suspicious values reported by the ERA-40 archive while, on the same days, the IGRA archive contains more plausible values.

Fig. 8. Time series of the number of active stations from the respective archives considered in this study. Only data from stations with at least 365 ascents are counted. The bottom right picture is the merged archive. Only WMO stations have been used.

Fig. 10. Time series of departures between observations at Moscow station (027612, Russia) at 500 hPa and 00:00 UTC, and reference datasets derived from reanalysis efforts: obs − NOAA 20CR analysis (yellow); obs − ERA-40 6 h background forecasts (green); obs − ERA-Interim 12 h background forecasts (red). Even if the obs − NOAA 20CR series shows deeper wiggles compared to the other time series, it is still useful for detecting potential breaks in the observation time series, as can be seen from the jump in 1969 detectable in both the obs − NOAA-20CR and obs − ERA-40 departures.

8 The merged archive The union of all datasets gives a total of 3217 stations (land stations and anchored weather ships using radiosondes or tracked balloons, with time series longer than 365 days), of which 3020 have been recognized as WMO stations with a valid WMO ID number. 1598 (1596 with WMO ID) stations contain temperature observations and 3152 (2957 with WMO ID) stations contain wind observations (as U and V components). The time series span from 1905 until today. The Meteorologisches Observatorium Lindenberg/Richard Assmann Observatorium station (WMO 10393), in Germany, has the longest record, going back to 4 April 1900, but it has several gaps due to wartime disruptions. The longest continuous upper air temperature record comes from Moscow, with data available from 1938.

Table 1. Archive contributions to the merged archive.

Table 2. Assignment strategy for observations for interpolation to the nearest synoptic time.
2019-04-25T13:11:12.073Z
2013-12-23T00:00:00.000
{ "year": 2013, "sha1": "a24a0847ac142b9b3b12521858ddd5f6963d08b9", "oa_license": "CCBY", "oa_url": "https://www.earth-syst-sci-data.net/6/185/2014/essd-6-185-2014.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "9640de112f4e5e02ca01fef2e9f9db565c6ae155", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography", "Geology" ] }
265308451
pes2o/s2orc
v3-fos-license
Improving the longevity of intravenous cannulas in sick neonates admitted to NICU in a tertiary care centre: a quality improvement project Background Neonatal intravenous cannulation, especially in preterms, is more challenging than in children or adults. Placement of an intravenous cannula is painful, and many cannulas need frequent changing due to complications. Each attempt at cannulation creates an entry for skin flora to cause systemic bacteraemia. This study was undertaken at a level III NICU. The team attempted to prolong existing cannula longevity to reduce the frequency of intravenous cannulation, thereby reducing handling and pain. Objectives To improve the longevity of peripherally inserted intravenous cannulas in sick neonates in NICU from the current 25.7 hours to 36 hours or more, over a span of 6 weeks. Materials and methods The quality improvement (QI) team comprised resident doctors and staff nurses. A fishbone analysis was used to identify factors that affected the longevity of intravenous cannulas. The five WHYs technique was used to identify the cause behind early cannula removal. Both techniques identified the fixation technique used at the study centre for targeted intervention. Plan-Do-Study-Act cycles were planned to explore different fixation techniques to improve cannula longevity. The unpaired t-test and the χ2 test were applied to analyse statistical significance. Results We achieved a significant improvement in cannula longevity from 25.7 hours to 39.6 hours just by improving the fixation technique over 6 weeks, with p=0.0006. Conclusions The QI study was successful and has been adopted into routine practice. Such initiatives would greatly impact babies in low-resource settings and in transit.

WHAT IS ALREADY KNOWN ON THIS TOPIC ⇒ Intravenous cannulation is a cause of pain and a potential risk of sepsis in neonates. The more cannulas needed, the more the pain, the greater the risk of sepsis and the higher the expenditure on consumables and man-hours.

WHAT THIS STUDY ADDS ⇒ This study finds that good mechanical fixation alone can significantly improve cannula longevities. Although mechanical fixation improves the longevities of all cannulas, the maximum impact is seen on cannulas used for clear-fluid infusions.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY ⇒ The authors propose that improving mechanical fixation of cannulas would have great impact at low-resource facilities, especially in transit and referral.

INTRODUCTION Neonatal intravenous cannulation, the most basic of procedures in NICU, required by almost all babies at some time during their NICU stay, is different from intravenous cannulation in children or adults. Preterm and low birthweight babies make insertion and maintenance of intravenous cannulas especially difficult. Frequent movement and a smaller surface area for anchoring lead to frequent dislodgement of inserted cannulas. This makes reinsertion an unavoidable, necessary but painful procedure for neonates. Every cannulation comes with pain and increased risk of sepsis [1], and many cannulas get removed after the occurrence of complications, most commonly infiltration [2]. Each attempt at cannulation creates a wound, a door for entry of skin-based bacteria, causing local cellulitis and even systemic bacteraemia [3] and neonatal sepsis, a leading cause of neonatal mortality [4]. Peripheral intravenous cannulas are simple, inexpensive and convenient for short durations of intravenous therapy, compared with central lines or peripherally inserted central catheters (PICCs), which may be left in situ longer. Changing of cannulas remains a problem, particularly in busy public hospitals. The average life of intravenous cannulas varies across neonatal units. Studies show average longevities of intravenous cannulas between 20 and 40 hours [5], but the longevity of peripheral cannulas at the study centre
was 25.7 hours, far below the average longevity documented at other centres [6]. This meant that neonates at the study centre needed their cannulas changed daily (more frequently) and, therefore, needed more handling and endured greater pain. Improvement in cannula longevity translates to reduced handling of neonates, fewer pricks, less pain, fewer complications, faster recovery and discharge, a reduced workload for overworked resident doctors and staff nurses, and reduced hospital expenses on recannulation. The study centre, therefore, decided to improve its cannula longevity.

AIM STATEMENT To improve the longevity of peripherally inserted intravenous cannulas in sick neonates admitted to the NICU (Neonatal Intensive Care Unit) of this tertiary care unit from the current 25.7 hours to 36 hours or more, over a span of 6 weeks, between the first week of January and the second week of February 2022.

METHODS This study was begun at a level III NICU in western Maharashtra, India, in January 2022. The unit is staffed by resident doctors enrolled in a postgraduate training programme. The unit has a bed strength of 58 and a turnover of 200-250 babies per month, requiring an average insertion of 42 cannulas per day. This study was planned as a quality improvement (QI) project. Patients were not directly involved in designing, conducting, reporting or dissemination of any plans of this research. A QI team was formed, with members and designated roles as shown in table 1. The cannula fixation technique in practice at the NICU was as follows:

Butterfly flap fixation
1. All equipment was collected in a sterile tray. After surgical handwashing, sterile gloves were donned by the resident doctor and the assisting staff nurse. The baby was given a sucrose swab.
2. The chosen site was cleaned with three swabs: a spirit swab, a swab dipped in povidone-iodine and finally a spirit swab, allowing the area to air-dry after each application.
3. The cannula was inserted, keeping the device parallel to the skin surface, to prevent a second puncture to the vessel.
4. Confirmation of cannula position was done by slowly flushing the cannula with 0.9% normal saline using a 1cc/2cc syringe. Smooth injection, causing no discolouration, pain or swelling, confirmed the cannula being in situ.
5. A transparent sterile dressing was applied over the cannula, covering it from the point of insertion till the wings.
6. A strip of adhesive tape (Micropore) was crossed over the wings of the cannula. Another strip of adhesive tape was used to stabilise the cannula wings onto the skin (figure 1). Micropore is a paper-based adhesive tape (1 inch wide), with a gentle glue, suitable for neonatal skin.

However, the authors noticed that butterfly-flap fixation allowed a lot of movement of cannulas with the movement of the baby, as shown in figure 1. Fishbone analysis and the five WHYs technique were used to formulate a list of factors that affected the longevity of intravenous cannulas, as summarised in figure 2.
The team concluded that achieving better fixation of cannulas was the target of their QI project, pursued through PDSA (Plan-Do-Study-Act) cycles. At the end of each PDSA cycle, ideas were either adapted, adopted or abandoned. The QI team held meetings every week where team members celebrated little victories, analysed failures, troubleshot problems and brainstormed for newer alternatives. These cycles and their outcomes are summarised in table 2.

A total of 139 cannulas were charted over 6 weeks (first week of January till second week of February 2022). Twelve inserted cannulas were excluded from analysis; all of these were usable but had to be removed at the time of the baby's transfer or death.

At the beginning of the study, the unit had an average intravenous cannula life of 25.7 hours (recorded by averaging the longevities of 67 cannulas inserted in 20 randomly chosen neonates during the first week of the QI project, the first week of January 2022). These 67 cannulas could have been inserted by anyone working in the NICU at that time and not necessarily the resident member of the QI team - a measure to eliminate bias and to estimate the unit's true average longevity. Between weeks 2 and 6 of the QI, 60 cannulas were inserted only by the resident member of the QI team, to ensure that all cannulas analysed were inserted according to the decided protocol. All 60 cannulas were used to calculate longevities and for all statistical inferences of the PDSA cycles. The authors had originally planned for the sustenance phase to start immediately after the end of testing of the PDSA cycles; this, however, was delayed due to an unforeseen shortage of manpower in the unit. Sustainability was studied between April and October 2022. During this period, the effectiveness of 'Fixomull-Fixation' was studied by averaging 30 randomly chosen cannulas every month, one cannula chosen every day for monitoring longevity.

All inserted cannulas were 24G plastic devices of various brands available in government supply. The unit had considered using 26G cannulas but rejected the idea after trying a few, owing to the poor quality of the products available in supply.

Patient parameters recorded for each inserted cannula were sex, gestational age, weight at the time of cannulation, content infused through the cannula - namely total parenteral nutrition (TPN), clear fluids or others (blood products, antibiotics, bolus injections, inotropes) - and whether the cannula was usable at the time of removal or was removed due to dislodgement/extravasation. Respective cannula hours were calculated for each cannula.

The team finally decided on the following improved technique for cannula insertion, as shown in figure 3.

Fixomull-Fixation
The first four steps (till figure 3A; figure 3B corresponds to points 2 and 3 of the description of the butterfly flap fixation) are identical to the procedure described earlier (figure 1).
5. A transparent sterile dressing is applied over the cannula, covering it from the point of insertion till the wings (as shown in figure 3C).
6. Two pieces of quadrangular adhesive dressing (Fixomull) of size 3 cm×3 cm are used to immobilise the wings of the cannula over the skin, overlapping each other (as shown in figure 3D,E).
7. The point of insertion of the cannula is kept exposed for examination for any signs of extravasation or swelling. Fingertips are left exposed to monitor for any signs of vascular compromise.
8. A compulsory splint is applied to immobilise the joint over which the cannula is inserted (as shown in figure 3F).
9. A compulsory 10 cm extension with a three-way stopcock is attached to the cannula and fixed to the splint (as shown in figure 3G,H).
10. An adhesive tape is applied over the extension tubing to prevent disconnection of the extension from the cannula hub and, therefore, reduce any movement at the point of entry of the cannula into the lodging vein.

Fixomull is a cloth-based woven adhesive dressing available as a 10 cm×10 m roll.

Once the unit finalised the fixation technique, new batches of residents had to be trained in it. This was accomplished in a compulsory orientation session for all staff newly joined at the NICU, with a video clip of Fixomull-Fixation. New joiners were rotated with the previous team for a period of 1 week, during which they learnt fixation under the supervision of the previously trained team. The unit continues to train new residents in this manner.

The unit protocol is to administer infusions through a syringe pump, using a 50cc/20cc syringe coupled with a 50 cm/100 cm extension line, attached to the 10 cm extension fixed with the cannula at the time of cannulation. For flushing cannulas, 0.9% normal saline is injected using a 1cc/2cc/5cc syringe. Saline is pushed slowly in a single push. In case of resistance, flushing is withheld and a second opinion is sought from another staff member. If the second opinion is 'difficult flush', the cannula is replaced. The unit does not have a protocol on using a heparin lock on peripheral cannulas, only on central catheters.

The unit aimed to achieve an improvement in cannula longevity as the primary outcome. A secondary outcome that the unit aspired to achieve was a reduction in the total number of cannulas used, and hence a cut in the number of pricks that a baby would require during the NICU stay.

The unpaired t-test was applied to analyse the statistical significance of Fixomull-Fixation. Various patient-specific parameters were individually studied to find their association with cannula longevity. A χ2 test was used to determine how Fixomull-Fixation impacted the reasons behind cannula removal.

RESULTS The QI team achieved an increase in cannula longevity from the existing 25.7 hours to 39.6 hours over a period of 6 weeks (first week of January to second week of February 2022), that is, a near 50% increase in 6 weeks. The maximum monthly average of cannula longevity was documented at 42.1 hours (July 2022, sustenance phase). The change in fixation brought about an absolute increase in cannula longevity in all cannulas, irrespective of the baby's birthweight or gestational age or the contents infused through the cannula. The authors attribute this result to the improved mechanical fixation of cannulas, which was statistically significant (p=0.0006 at 95% CI of −20.772 to −5.028). The statistically significant categories of intravenous cannulas are discussed below. Figure 4 gives a graphical representation of the same. The patient demographics and the impact of Fixomull-Fixation on longevity are summarised in table 3. Its sustainability and the trends of average cannula longevity over the months following the QI study are shown in figure 5.
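The group comparison described above can be reproduced in outline with a few lines of Python. The sketch below is illustrative only: the study's per-cannula longevities are not published, so the samples are synthetic placeholders generated to mimic the reported group sizes and means (67 baseline cannulas averaging 25.7 hours; 60 post-intervention cannulas averaging 39.6 hours), and the spread values are assumptions.

# Illustrative unpaired t-test with a 95% CI for the difference in means.
# Synthetic data only; parameters mimic the study's reported summaries.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(loc=25.7, scale=12.0, size=67)   # week-1 cannulas (hypothetical spread)
fixomull = rng.normal(loc=39.6, scale=14.0, size=60)   # PDSA-phase cannulas (hypothetical spread)

# Unpaired (two-sample, equal-variance) t-test, as applied in the study.
t_stat, p_value = stats.ttest_ind(baseline, fixomull)

# 95% CI for the difference in means (baseline - fixomull), using the pooled
# standard error consistent with the equal-variance t-test.
diff = baseline.mean() - fixomull.mean()
n1, n2 = len(baseline), len(fixomull)
sp2 = ((n1 - 1) * baseline.var(ddof=1) + (n2 - 1) * fixomull.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")

With real data, a CI such as the reported (−20.772, −5.028) excludes zero, which is what makes the improvement statistically significant.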
Here, 'baseline' longevities are the average cannula longevities in the first week of January 2022, and 'postintervention' longevities are those documented at the end of the sixth week of the QI project; the 'intervention' is the Fixomull-Fixation defined earlier. The unit achieved a near 50% increase in cannula longevities, which translated to an approximately 33% reduction in the number of cannulas required in the NICU (since cannula consumption scales inversely with longevity, a roughly one-and-a-half-fold longer indwelling time means roughly two-thirds as many cannulas). This translates to a 33% reduction in the number of pricks and, therefore, the pain that a baby is subjected to during the NICU stay. Also, the longevities of all cannulas improved, irrespective of the contents infused through them.

DISCUSSION This study has been undertaken and reported according to the SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence 2.0) guidelines.7 The mean longevities of cannulas used for various fluids (clear fluids, TPN, inotropes, blood products, antibiotics, etc), inserted in neonates of various gestational ages and birthweights, improved individually, but not all of the improvements were significant.

The most common reason for removal of cannulas in the unit remained extravasation injury. This is a non-modifiable factor in any NICU, with babies having fragile skin, thin and easily distensible subdermal tissues and thin-walled, delicate blood vessels, as demonstrated by Odom et al.8 Cannulas used for clear intravenous fluids were found to have benefited the most from mechanical fixation when compared with cannulas used for TPN and other injectables. This was because clear infusions were the least irritant to the neonatal blood vessels.

Cannulas used for TPN showed local inflammation and extravasation because of the high osmolarity of the infused fluid, and many had to be removed even though they had not been dislodged. This finding is also consistent with a study by Fessler and Rejrat,9 which described the complications of venous lines infusing high-osmolar solutions like TPN in neonatal ICUs. Administration of blood products and of injectables such as antibiotics and electrolyte corrections given as short boluses left cannulas unused for long stretches of their indwelling time. This led to cannula blockage, and flushing the cannula regularly with saline could not be adequately practised in the unit. The usefulness of intermittent flushing in maintaining the patency of intravenous cannulas, as demonstrated by Uma et al10 and a standard practice in many units, could not be consistently practised at the study centre.

The most significant impact of Fixomull-Fixation was seen on cannulas infusing clear fluids. Statistically significant p values could be documented only for clear-fluid cannulas, because the numbers of cannulas in the baseline and during the PDSAs were comparable only for cannulas used for clear fluids. However, the baseline longevities of all cannulas and the absolute increases in average cannula longevities were, respectively, comparable for all cannulas included in the study. Therefore, this QI study infers that improving the mechanical fixation of cannulas had a key role in improving the longevities of intravenous cannulas, whatever the choice of infusion.

Dalal et al demonstrated that splints have no effect on the longevity of cannulas.
11 However, splinting cannulas remained a constant practice in this study. An advantage of using splints that the QI team found was that splints provided an additional surface for anchoring the 10 cm three-way extension, thereby reducing the direct strapping to the baby's limb and, therefore, making the assembly relatively more comfortable for the neonate.

The impact of mechanical stabilisation was best appreciated in neonates of 28-32 weeks' gestation (p=0.01) and of less than 1000 g birthweight (p=0.003). In babies <28 weeks of gestation, cannula longevities increased from 26.3 hours to 66.5 hours (a 2.5× increase in longevity). These tiny infants are relatively less active than neonates of higher weights and greater gestational maturity. The authors infer from the study that anchoring cannulas firmly to the baby reduced movement of the cannulas at the point of insertion and hence improved cannula longevities tremendously.

The sex of the neonates had no impact on the longevity of cannulas - Fixomull-Fixation was equally effective in male and female babies (p=0.0009 and p=0.04, respectively).

An increase in cannula longevity from 25.7 hours to 39.6 hours translates approximately to going from one cannula a day to two cannulas in 3 days. This is a 33% reduction in the number of pricks that a baby must endure during the NICU stay. Documenting an improvement in pain outcomes or a reduced duration of NICU stay due to lesser trauma/local site infection was beyond the scope of this QI study but is certainly an area of interest for the team.

At the time when the QI was in progress, the unit had been facing a shortage of manpower and was forced to reduce the use of PICC lines due to a lack of personnel trained in inserting them. This led to an increased dependency on peripheral cannulas. Hence, the authors believe that the highest motivation to work on improving cannula longevities came from a need becoming a necessity.

Planning and improving the fixation technique was achieved in 6 weeks, after which the unit laid down a written protocol for inserting and fixing intravenous cannulas. The entire staff was trained to insert and anchor cannulas according to the Fixomull-Fixation method. The improvement in the average longevity of intravenous cannulas was sustained over the coming months.

The only deterioration in cannula longevities was noted at the time of rotation of resident doctors in the NICU, when trained teams were replaced by teams needing orientation. The QI team decided that future teams should be oriented to Fixomull-Fixation a little ahead of their tenure in the NICU, since these rotations at the teaching institute are planned in advance.

Strengths Fixomull-Fixation is easy to replicate and sustain even in the face of changing teams, which is inevitable in teaching hospitals. It led to a near 50% increase in cannula longevity, therefore substantially reducing the average number of cannula insertions required per day and bringing down the number of cannulas that the hospital procured. The unit found Fixomull-Fixation to be sustainable. The unit did not require any additional funding to carry out the QI or the sustenance phase. In fact, the unit reported an approximately 33% reduction in the requirement of cannulas compared with the numbers required prior to the QI initiative. Although the unit did not undertake a formal cost analysis of the QI and sustenance periods, there is indirect evidence of the adopted practice being cost-effective.
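As a quick check of the arithmetic behind the reported figures, the snippet below converts the longevity gain into the implied reduction in cannula count, under the simple assumption that the number of cannulas a baby needs scales inversely with average cannula life (the same total infusion hours are covered either way).

# Back-of-envelope check of the ~50% longevity gain and ~33% cannula reduction.
baseline_longevity = 25.7   # hours
improved_longevity = 39.6   # hours

increase = improved_longevity / baseline_longevity - 1
reduction = 1 - baseline_longevity / improved_longevity

print(f"Longevity increase: {increase:.0%}")        # ~54%, reported as "near 50%"
print(f"Cannula-count reduction: {reduction:.0%}")  # ~35%, i.e. roughly one cannula
# a day falling to two cannulas in three days (the paper's ~33% figure)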
Limitations The unit noticed that administering TPN through peripheral cannulas led to a significantly greater number of extravasation injuries to the neonates. However, PICCs could not always be planned for all candidate babies, due to financial constraints and the unavailability of personnel trained in inserting PICCs. The unit planned for its babies to be shifted to enteral nutrition more aggressively, in order to cut down on the requirement for TPN and PICCs. Therefore, cannulas through which TPN was administered could not be compared reliably between butterfly-flap fixation and Fixomull-Fixation.

Once the unit is adequately trained in inserting PICCs, the authors would like to explore the possibility of achieving an even higher average longevity of intravenous cannulas, and reach a 60-hour or the ideal 72-hour target.12 Further, the authors would also like to study any improvement in the numbers of grade 3/4 extravasations with Fixomull-Fixation, which was beyond the scope of this study.

CONCLUSIONS Fixomull-Fixation was found to be more technically sound, with a simple learning curve, and proved to be a sustainable practice. Better mechanical fixation of intravenous cannulas improved cannula longevity from 25.7 hours to 39.6 hours, making the QI project a success. The unit achieved a near 50% increase in cannula longevities and a near 33% reduction in the total number of cannulations needed. The study inferred that improvement in mechanical fixation did not prevent cannula complications arising due to the nature of the fluid infused through the cannula or due to disuse of the cannula for a long time. Thus, mechanical fixation is an independent factor modifying the longevity of cannulas. The authors also propose that improving mechanical fixation is likely to have a profound impact at peripheral centres and in low-resource settings, particularly during transit in a referral chain, and especially in India, where there is a perpetual shortage of manpower trained to handle newborns in remotely located health facilities.

Figure 1 (A) Butterfly-flap fixation and (B) excessive movement of the inserted cannula.

Figure 3 Technique of cannula fixation finally adopted by the unit at the end of the PDSA cycles, called the 'Fixomull-Fixation' in this article and analysis. PDSA, Plan-Do-Study-Act. Parts A and B correspond to the second and third points in the description of the butterfly-flap technique described previously; the first four steps are identical to the procedure described for insertion with the butterfly-flap technique (figure 1).

Figure 5 Trends of cannula longevity during the QI testing and sustenance phases. QI, quality improvement. NICU, Neonatal Intensive Care Unit.

Table 1 The QI team members and their roles

Table 2 The PDSA cycles (online supplemental file 1)

Table 3 Demographics, number of cannulas (Num), average cannula longevities (ACL) and impact of intervention. *p value <0.05 at 95% confidence interval, implying statistical significance of Fixomull-Fixation as an intervention. TPN, total parenteral nutrition.
Convalescent (immune) plasma treatment in a myelodysplastic COVID-19 patient with disseminated tuberculosis

During the ongoing COVID-19 pandemic due to the SARS-CoV-2 virus, in which evidence-based medical paradigms cannot be easily applied, difficult clinical decisions are required, particularly in 'difficult-to-treat' cases in the high-risk group with associated comorbidities. Convalescent immune plasma therapy is a promising option as a sort of 'rescue' treatment in COVID-19 immune syndrome, where miraculous antiviral drugs are not yet available. In this report, we aim to convey our experience of a multi-task treatment approach with a convalescent immune plasma and anti-cytokine drug combination in a COVID-19 patient with extremely challenging comorbidities, including an active myeloid malignancy, disseminated tuberculosis and kidney failure.

Introduction COVID-19 immune syndrome is a multisystemic disorder following infection with the SARS-CoV-2 virus [1]. The syndrome mainly affects the lungs, but other systems including the myocardium [2], central nervous system [3], liver [4], kidneys [5], bone marrow [6] and cutaneous tissues [7] are also under attack. SARS-CoV-2 virus-induced immune suppression and an exaggerated pathological multisystemic immune attack with macrophage activation are the hallmarks of the COVID-19 immune syndrome [8]. There are not sufficient randomized controlled trials for the treatment of COVID-19, since it is not easy to establish a classical evidence-based medicine approach under the emergency conditions caused by the pandemic. Case reports and case series are the currently available clinical evidence, particularly for passive immunity transfer, namely convalescent immune plasma therapy. Given the absence of effective miracle antiviral drugs, convalescent plasma therapy is one of a few promising treatments for specific COVID-19 management [9]. Among the studied COVID-19 subpopulations, patients with comorbidities like malignancy and kidney disease constitute the high-risk group for poor clinical outcomes [10]. In this critical subpopulation, there are no data regarding treatment with immune plasma products. We, herein, report an immunocompromised patient due to myelodysplastic syndrome (MDS/RAEB-1 FAB subtype), complicated by recently disseminated tuberculosis with associated kidney disease, and attacked by SARS-CoV-2 leading to COVID-19 syndrome, which was successfully managed via the administration of double convalescent immune plasma therapies. Elucidation of the exact administration schedule of convalescent immune plasma in COVID-19 immune syndrome is important for the proper management of such difficult-to-treat disease states, which carry a potential for morbidity and mortality.

Case presentation A 55-year-old male with a history of MDS of FAB refractory anemia with excess blasts-1 subtype (MDS/RAEB-1), complicated by disseminated systemic tuberculosis and associated kidney disease, was admitted to our hospital with complaints of ongoing high fever and persistent cough lasting for about three days. He had recently been discharged from the hospital after a follow-up of two months for the disseminated tuberculosis infection and was still on the classic four-drug regimen (isoniazid, rifampin, pyrazinamide, ethambutol). The MDS had been diagnosed two and a half years earlier in another clinic, where he had been followed at three-month intervals without any specific MDS-directed therapeutic intervention.
Upon admission, physical examination revealed that he had a fever of 38.8°C, tachycardia (121 bpm) and tachypnea (24 breaths per min). The oxygen saturation was 95% in room air. Low-dose chest computed tomography disclosed bilateral multiple peripheral multifocal ground-glass opacities, which indicated COVID-19 pneumonia (Fig. 1). Just after obtaining the nasopharyngeal swab sample, which was later found to be positive for COVID-19 infection, he was urgently hospitalized. Complete blood counts revealed leukocytosis, lymphopenia, neutrophilia and eosinophilia, with high ferritin, C-reactive protein (CRP), D-dimer and LDH. The values of the laboratory tests are given in Table 1. The basal corrected QT interval (QTc) was calculated as 488 milliseconds on the electrocardiogram (ECG). Based on those examinations, an antiviral drug, favipiravir, was prescribed as the sole agent. The hydroxychloroquine/azithromycin combination could not be administered due to the prolonged basal QTc. The four-drug regimen for tuberculosis was continued as well. Since he had a high neutrophil count and procalcitonin level, a complicating bacterial infection could not be ruled out, and meropenem was added to the treatment scheme on the second day of antiviral treatment after obtaining blood, urine and sputum cultures.

On the fifth day after symptom onset, the patient complained of severe dyspnea and became tachypneic (26-28 breaths per min). He still had a fever of 39.5°C. Taking his significant comorbidities and immunocompromised state into consideration, a previously stored 200 mL convalescent plasma product was transfused to our patient following universal infusion safety protocols, without any adverse reaction or complication. The convalescent plasma product had been collected using the Trima Accel® Automated Blood Collection System from a donor who had previously recovered from COVID-19 disease and met universal donation criteria. The anti-SARS-CoV-2 IgG semi-quantitative titer of the donor's plasma, studied by the EUROIMMUN ELISA kit (Order no EI 2606-9601 G, produced by EUROIMMUN AG, Seekamp 31, 23560 Luebeck, Germany), was found to be positive (titer 6.6; <0.8 negative, ≥0.8 to <1.1 borderline, ≥1.1 positive) before collection.

During the follow-up period, he complained of progressive dyspnea again. Tachypnea (30-32 breaths per min) and fever did not resolve either. The oxygen saturation was 90% in room air, hence oxygen supplementation of two L/min via nasal cannula was initiated on the fourth day of the antiviral treatment. Taking the clinical deterioration into consideration, the patient was transferred to the intensive care unit (ICU) and another 200 mL of convalescent immune plasma, obtained from the same donor, was transfused again. Since the acute phase reactants, namely CRP, D-dimer and ferritin, remained high, the lymphocyte count began to decrease further (0.89 × 10³/μL) and persistent fever was evident, macrophage activation syndrome (MAS) was suspected. The H-score for reactive hemophagocytic syndrome was calculated as 169 points, indicating a probability of MAS of 40-54%. The serum IL-6 level was also high (72.2 pg/mL; normal range: 0-5.9). Therefore, a single dose of tocilizumab 400 mg was also given on the fourth day of the antiviral treatment to manage the MAS status. Since the patient had already been under effective treatment for tuberculosis for more than one month, the administration of tocilizumab was considered safe in this situation.
However, as there was still a risk of tuberculosis reactivation, written informed consent was also obtained from the patient before the administration of tocilizumab.

On the following days of admission, significant improvement in the general health status of the patient was observed with this multi-task clinical management approach. Furthermore, the anti-SARS-CoV-2 IgG test, studied by the same method as before, was found to be positive with a titer of 5.1. His dyspnea improved and no febrile values were recorded. His respiratory rate was 24-26 breaths per minute and the oxygen saturation was above 95% in room air. The lymphocyte count increased gradually (up to 1.58 × 10³/μL) and CRP values decreased (down to 1.03 mg/dL). Favipiravir treatment was terminated on the seventh day. Since Klebsiella pneumoniae was also detected in the sputum culture, meropenem treatment was continued for up to seven days as well. The patient was transferred back to the standard care ward from the ICU. A second PCR for SARS-CoV-2 was negative. After two days of follow-up, he was discharged from the hospital with full recovery from the COVID-19 immune syndrome. Alterations of the absolute lymphocyte count and maximum body temperature throughout the clinical follow-up are shown in Fig. 2.

Discussion The central hypothesis of this case report is that immune plasma treatment, together with other multi-task clinical approaches, could be useful for the management of COVID-19 immune syndrome in 'difficult-to-treat' patients with cancer, kidney failure and a concurrent specific infection such as tuberculosis. The safety profile of this clinical approach is also acceptable, since no adverse events, intolerances or toxicities were observed with the double administration of immune plasma treatment combined with an anti-cytokine drug. Our COVID-19 patient truly represents a quite difficult subgroup to treat [10], since he already had acquired immunodeficiency due to active MDS/RAEB-1 myeloid neoplastic disease, with disseminated tuberculosis infection together with renal injury complicated by multiorgan involvement. This is just a single case report, and these positive results inherently cannot be generalized to all immunocompromised patients with COVID-19. Nevertheless, in this period when treatment results with strong evidence cannot be obtained, it is hard to find satisfactory options [11], and such examples are encouraging for convalescent immune plasma therapies.

Another confounding superimposed clinical picture is macrophage activation syndrome (MAS), which is linked to the severe acute phase reaction with an IL-6 increment [8]. In order to break this counterproductive link in the chain, we preferred to administer tocilizumab, a quite risky drug in this already immunodeficient person [12]. Meanwhile, significant improvements in both the clinical findings and the laboratory values of the patient took place just after the combination of double immune plasma infusion and tocilizumab. Of course, it may not be easy to distinguish which of these 'paradoxical' treatment choices (passive immune transfer versus immunosuppression) was mainly responsible for the positive effect. As he was a difficult patient to treat, we clinically decided to use the available options in combination. The use of tocilizumab in immunocompromised patients is also a quite difficult decision, especially when there is a recent history of disseminated tuberculosis under active drug-combination treatment.
In everyday medical practice, particularly during the ongoing COVID-19 pandemic, it may sometimes be necessary to make difficult decisions to treat patients with complicated courses of the infection. Hypothetically, the use of immune plasma products may also have served as a buffering factor against the potential adverse effects of immunosuppressive medications such as anti-cytokine biological drugs. Of course, it is not possible to support this speculation objectively, and further research is clearly needed. In all of the present expert opinion-based clinical decisions, the combined multi-task clinical approach was applied to our patient, and the results were positive on this single-case basis. The contribution of the combined treatment of anti-cytokine drugs and convalescent immune plasma to COVID-19 management needs to be confirmed by future controlled studies.

Declaration of Competing Interest The authors declared no conflict of interest.
The Mechanism of the Propagation in the Anionic Polymerization of Polystyryllithium in Non-Polar Solvents Elucidated by Density Functional Theory Calculations. A Study of the Negligible Part Played by Dimeric Ion-Pairs under Usual Polymerization Conditions

The elementary processes occurring in the anionic polymerization of styrene with dimerically associated polystyryllithium (propagation during the anionic polymerization of dimeric polystyryllithium) in the gas phase and cyclohexane were studied using M062X/6-31+G(d), a recently developed density functional theory (DFT) method, and compared with the polymerization of styrene with non-associated polystyryllithium, which was described in a previous study. The most stable transition state in the reaction of styrene with dimeric polystyryllithium has a structure in which the side chains of styrene and the two chain-end units of polystyryllithium are located in the same direction around the Li atom near the reactive site. The relative enthalpy for this transition state in cyclohexane is 28 kJ·mol−1, which is much lower than that for the reaction of non-associated polystyryllithium (51 kJ·mol−1). However, the relative free energy (which determines the rate constant) for the former is 93 kJ·mol−1, which is greater than that for the latter by 7 kJ·mol−1, indicating that the latter reaction (reaction with non-associated polystyryllithium) is advantageous over the former (reaction with dimeric polystyryllithium). Their rates of reaction are also affected by initiator concentrations; in the case of reactions with low initiator concentrations, from which high molecular weight polymers are usually obtained, the rate of reaction corresponding to non-associated polystyryllithium is much larger than that corresponding to dimeric polystyryllithium.

1. Introduction

In the case of the anionic polymerization of styrene in non-polar solvents, it is generally accepted that polystyryllithium is mainly associated into dimeric species (PStLi)2 in equilibrium with a small amount of non-associated PStLi chains [1-5]. A kinetic order of 0.5 with respect to [PStLi] for this reaction indicates that only non-associated PStLi ion-pairs are able to propagate [6,7]. However, the possibility of a reaction of styrene with dimeric (PStLi)2 or higher aggregates was proposed based on experimental data, such as the addition of butadiene to freeze-dried polystyryllithium and the existence of higher aggregates demonstrated using high-performance analytical techniques.

As described earlier, some researchers are working on the polymerization of styrene with dimeric or higher species. However, it has not been clearly shown how and why the dimeric and higher species would be more reactive than the non-associated species. The aim of this study is to clarify the mechanism of the anionic polymerization of styrene with dimeric polystyryllithium using DFT calculations, in a manner similar to those performed for non-associated polystyryllithium as described in our previous study [20]; further, we intend to compare and contrast the differences between the reactions of the dimeric species and the non-associated species.

2. Methods

Polystyryllithium species obtained by the addition of styrene to alkyllithium can be denoted as R(St)mLi, where R denotes the alkyl group of the initiator. We employed HStLi by setting m = 1 and substituting H for R.
Using this structural model, the addition of styrene to dimerically associated polystyryllithium (a propagation reaction) was studied (in our previous study, we set m = 1 and 2 and compared their transition states in order to understand the effect of the penultimate unit and elucidate the reaction mechanism of styrene polymerization with non-associated polystyryllithium). Our purpose in this investigation is to compare the reactions of non-associated and dimerically associated polystyryllithium. Actual calculations on the transition state of St/(HSt2Li)2 (m = 2) are rather complicated and difficult to perform; therefore, we set m = 1 and compared the obtained results for St/(HStLi)2 with those for St/HStLi (m = 1) from the previous study.

In this study, calculations were performed using the M062X/6-31+G(d)//M062X/6-31+G(d) method [21] with the Gaussian 09W program [25], in essentially the same manner as in our previous study. The dimeric species ((HStLi)2) and the intermediate complexes (precursor complexes, transition states, etc.) and products of the reaction of (HStLi)2 with styrene were optimized, and the obtained geometries and the enthalpy and Gibbs free energy values at 25 °C were used for discussion. The transition states were confirmed to have one imaginary frequency. The precursor complexes and products were obtained by first applying the intrinsic reaction coordinate (IRC) approach [26-28] to the transition states and then optimizing the obtained intermediate structures completely. For calculations in cyclohexane, the integral equation formalism (IEFPCM) variant of the polarizable continuum model (PCM), a widely used method, was employed [29,30].

In order to compare the stabilities of the structures in the gas phase, the values of relative enthalpy in the gas phase (∆Hr) were calculated using the obtained values of enthalpy (H) shown in Table A1 of Appendix A.1, according to the following procedure:

∆Hr = H[(HStLi)2] − H[(HStLi)2]0,   (1)

where H[(HStLi)2] denotes the enthalpy of a particular (HStLi)2, and H[(HStLi)2]0 is the enthalpy of the most stable (HStLi)2, that is, the one with the lowest free energy among the studied dimers. For the most stable dimer, ∆Hr = 0, and for the other dimers, ∆Hr indicates the extent of instability (in a thermodynamic sense) of the particular dimer, (HStLi)2, with respect to the most stable dimer.

∆Hr = H[St/(HStLi)2] − {H(St) + H[(HStLi)2]0},   (2)

where H[St/(HStLi)2] denotes the enthalpy of a particular St/(HStLi)2 system and H(St) is the enthalpy of styrene. In this case, ∆Hr indicates the extent of instability of the particular St/(HStLi)2 system with respect to the starting materials (styrene and dimeric (HStLi)2). ∆Hr for the transition state corresponds to the apparent activation energy of the reaction.

The values of relative free energy in the gas phase (∆Gr) were also calculated in the same manner as those of ∆Hr, using the values of G shown in Table A1. ∆Hrch and ∆Grch, the values of relative enthalpy and free energy, respectively, in cyclohexane, were calculated in a manner similar to that used to calculate ∆Hr and ∆Gr in the gas phase, but using the values of H and G in cyclohexane (Table A2, Appendix A.1). In the geometries of all the studied structures, C−Li contacts with distances less than 0.245 nm were marked with full or dotted lines as η-coordinated bonds.
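A minimal sketch of the bookkeeping in equations (1) and (2) is given below. The enthalpies are hypothetical placeholders (in hartree) standing in for the Gaussian 09W outputs tabulated in Table A1 - only the differences matter - and the values were chosen merely so that the resulting ∆Hr come out near the magnitudes reported later in this paper.

# Sketch of the relative-energy procedure of equations (1) and (2).
# All absolute enthalpies below are hypothetical placeholders.
HARTREE_TO_KJ = 2625.5  # 1 hartree = 2625.5 kJ/mol

H_dimer = {"1-a": -1013.4200, "1-c": -1013.4165}  # H[(HStLi)2], hypothetical
H_styrene = -309.5800                              # H(St), hypothetical
H_system = {"TS 3-c": -1322.9916}                  # H[St/(HStLi)2], hypothetical

# Reference: the most stable dimer (here picked by lowest H for simplicity;
# the paper uses the dimer with the lowest free energy).
H_dimer0 = min(H_dimer.values())

# Equation (1): stability of each dimer relative to the most stable one.
dH_r_dimer = {k: (v - H_dimer0) * HARTREE_TO_KJ for k, v in H_dimer.items()}

# Equation (2): each St/(HStLi)2 structure relative to styrene + most stable dimer.
dH_r_system = {k: (v - (H_styrene + H_dimer0)) * HARTREE_TO_KJ
               for k, v in H_system.items()}

print(dH_r_dimer)   # e.g. {'1-a': 0.0, '1-c': ~9.2}  (kJ/mol)
print(dH_r_system)  # dH_r of a transition state = apparent activation enthalpy (~22)

The same two lines of arithmetic, applied to the Gibbs free energies G of Table A1 (or to the cyclohexane values of Table A2), give ∆Gr, ∆Hrch and ∆Grch.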
3. Results and Discussion

3.1. Addition of Styrene to (HStLi)2 in the Gas Phase

(HStLi)2. To study the addition of styrene to dimeric polystyryllithium, several types of suitable dimeric species should be chosen. To this end, a series of dimeric (HStLi)2 used in our previous study on the addition of styrene to non-associated polystyryllithium was employed (Figure 1, 1-a to 1-f). (1) Structures 1-a to 1-d exhibited sandwich-type features, while 1-e and 1-f were six-membered ring structures analogous to the four-membered ring structure of alkyllithium dimers. The former four (HStLi)2 with sandwich-type structures have much lower ∆Hr and ∆Gr values than the latter two six-membered structures. (2) In 1-a and 1-b, the side chains of the HSt-groups (chain-end units) were located near each end of Li−Li one after another, while in 1-c and 1-d, they were located near one end of Li−Li. 1-a and 1-b exhibited lower ∆Hr and ∆Gr when compared to 1-c and 1-d.

Figure 1. Optimized geometries and relative energies of (HStLi)2 in the gas phase, originally shown in our previous paper. The small drawing at the top of each structure represents a simplified overhead view of the lower drawing; carbon atoms in one of the HSt-groups (the upper HSt-group in the lower drawing) are colored blue. In the lower drawing, C−Li distances less than 0.225 nm are shown. ∆Hr and ∆Gr, which represent the relative enthalpy and free energy, respectively, are expressed in kJ·mol−1.

Transition state. There are several transition states for the addition of styrene to each (HStLi)2 shown in Figure 1. These transition states are distinguished by the arrangement of their groups, i.e., whether styrene is coordinated with two Li atoms or only one Li atom, whether the side chain of styrene approaches in the same direction or the opposite direction with respect to the side chain of the reacting HSt-group, and, if styrene is coordinated with two Li atoms, from which part of the phenyl group styrene approaches the Li atom away from the reaction site (from C2-C3 or C5-C6, as shown in the drawing in Table 1). In the case of (HStLi)2(1-a), which has the lowest ∆Gr value, the possible transition states for the addition of styrene were optimized, and the typical transition states are shown in Figure 2 and Table 1 (2-a to 2-d). In each structure, the drawing on the left side represents the side view of the drawing on the right side, and the upper drawing is the overhead view of the lower drawing. The carbon atoms of styrene are colored blue, while those of the reacting HSt-group are colored brown. The blue arrows in each drawing show the main displacement vectors corresponding to the imaginary frequency of the transition state. Accordingly, the blue arrows at the α-carbon of the side chain of the reacting HSt-group and the terminal carbon of the side chain of styrene indicate that these two carbon atoms react in the direction of the arrows to form the product. The relative enthalpies (∆Hr), which are expected to correspond with the apparent activation energies of the reaction, and the relative free energies (∆Gr), which determine the rate constants, of these transition
states are also shown in Figure 2 and Table 1.

In each of the transition states 2-a and 2-b, styrene is coordinated with the two Li atoms, and the styrene side chain approaches from a direction opposite to that of the side chain of the reacting HSt-group. They differ in the portion of styrene coordinating with the Li atom away from the reactive site (C2-C3 or C5-C6), as clearly shown in the left-side drawings of 2-a and 2-b. In 2-c, styrene is coordinated with two Li atoms and the side chain of styrene approaches in the same direction as the side chain of the reacting HSt-group, while in 2-d, styrene is coordinated with only the Li atom near the reactive site and the side chain of styrene approaches in the same direction as that of the reacting HSt-group. Comparing the ∆Hr and ∆Gr values of these transition states, it can be observed that the transition states in which styrene is coordinated with two Li atoms in a direction opposite to that of the reacting HSt-group, i.e., 2-a and 2-b, have lower ∆Hr and ∆Gr values than the other transition states. In 2-a and 2-b, the phenyl group of styrene approaches Li from a different portion, and calculations were conducted to determine which of them results in a lower ∆Gr for the transition state.

Figure 2. Typical transition states for the addition of styrene to (HStLi)2(1-a); their arrangements are summarized in Table 1 and also in the relevant description in the main text. Hydrogen atoms are not shown. ∆Hr and ∆Gr, the relative enthalpy and free energy, respectively, are expressed in kJ·mol−1.

Table 1. Arrangement of the transition states shown in Figure 2 for the addition of styrene to (HStLi)2(1-a). Footnotes: a Li(2) is the Li atom away from the reactive site; b Opp.: opposite; c the numbering of carbon atoms is shown in the drawing accompanying the table; d distance between the two carbon atoms participating in the reaction.
For the other (HStLi)2 moieties shown in Figure 1, the possible transition states were calculated in the same way as for those shown in Figure 2, and the transition states with the lowest ∆Gr for each type of (HStLi)2 are shown in Figure 3 and Table 2 as structures 3-b to 3-f, along with transition state 2-a, which has the lowest ∆Gr of the St/[(HStLi)2(1-a)] system. In each of these transition states, styrene is coordinated with two Li atoms and the side chain of styrene is placed in the direction opposite to that of the side chain of the reacting HSt-group, as expected. The ∆Hr values of these transition states are low, around 22 (for 3-c) to 39 kJ·mol−1 (for 3-e), while the ∆Gr values are fairly high, around 87 (for 3-c) to 113 kJ·mol−1 (for 3-e), and their Li−Li distances range from 0.25 to 0.35 nm. From the ∆Gr values of these transition states, transition state 3-c was found to be the most stable, followed by 3-d, 3-b, 2-a, 3-f, and 3-e. In 3-c, the reacting HSt-group is situated oblique to the Li−Li line (red line in the upper drawing of structure 3-c); the planes of styrene and the unreacting HSt-group are situated parallel to each other (two red lines in the left drawing of structure 3-c), and the Li−Li distance is 0.35 nm, the longest of all the transition states. The orders of magnitude of the ∆Hr and ∆Gr values of these transition states do not agree with the orders of magnitude of the ∆Hr and ∆Gr values of the original (HStLi)2. For example, the ∆Hr and ∆Gr values of (HStLi)2(1-c) and (HStLi)2(1-d) are higher than those of (HStLi)2(1-a) and (HStLi)2(1-b) by about 10 kJ·mol−1, as shown in Figure 1, whereas the corresponding transition states differ, as shown in Figure 3, by 3−11 kJ·mol−1. However, the difference is not as high as that between ((HStLi)2(1-e) and (HStLi)2(1-f)) and ((HStLi)2(1-a) and (HStLi)2(1-b)), which is about 50 kJ·mol−1, as shown in Figure 1.
Table 2. Arrangement of the transition states shown in Figure 3 for the addition of styrene to each (HStLi)2 in Figure 1. Footnotes: a Li(2) is the Li atom away from the reactive site; b Opp.: opposite; c the numbering of carbon atoms is shown in the drawing accompanying the table; d distance between the two carbon atoms participating in the reaction.

As described earlier, 3-c and 3-d possess lower ∆Hr and ∆Gr than the other transition states. In these transition states, all three side chains, i.e., those of styrene plus the two HSt-groups, are located around the Li atom near the reaction site, as clearly shown in 3-g and 3-h (views from another point for 3-c and 3-d, respectively). This placement may be responsible for the low ∆Gr of these transition states. In 3-c, the three said side chains are located in the same direction with respect to Li−Li, as shown in 3-g, while in 3-d, the side chains of the two HSt-groups interact face to face with each other (3-h). The structure of 3-c may have caused the oblique positioning of the reacting HSt-group with respect to Li−Li and the parallel sandwiching of the phenyl groups of styrene and the unreacting HSt-group, resulting in a low ∆Gr value.

Reaction path (comparison with the reaction of non-associated polystyryllithium). The pathway of the reaction system whose transition state is 3-c is shown in Figure 4; this system will be called system(dim-r) hereafter. The reaction proceeds in three steps. First, an initial complex is formed, and it goes through the first and second steps to the final step (the precursor complex, transition state, and product). In Figure 4, only the initial complex and the details of the final step are shown, because the complete process is complicated and we can discuss the reaction path using the information in Figure 4 (the complete reaction path is shown in Figure A1 of Appendix A.2). It can be observed from Figure 4 that the values of ∆Hr were very low, as (HStLi)2 forms the initial complex, precursor complex, transition state, and product without dissociating into non-associated HStLi. The ∆Hr value corresponding to the initial complex was −19 kJ·mol−1, indicating an exothermic phenomenon. The ∆Gr values were relatively high, which will be discussed in later sections. The distance between the two carbon atoms participating in the reaction becomes shorter as the reaction proceeds, from 0.37 nm for initial complex 4-a, through 0.24 nm for transition state 3-c, to the normal single-bond distance for product 4-d (0.155 nm).
Figure 4. Reaction pathway for St/[(HStLi)2(1-c)] (system(dim-r)) in the gas phase. Although the reaction proceeds in three steps, only the first complex of the first step (4-a) (the initial complex) and the three structures of the final step (4-b, 3-c and 4-d) are shown here (the complete pathway is shown in Figure A1 of Appendix A.2). The small drawing on the left side of each structure is the side view of the right-hand side drawing. Carbon atoms of styrene are colored blue, while those of the reacting HSt-group are colored brown. In each drawing, the two carbon atoms participating in the reaction are connected by a dotted or full line and the distance between them is shown in nm. Hydrogen atoms are not shown. ∆Hr and ∆Gr, the relative enthalpy and free energy, respectively, are expressed in kJ·mol−1.

In this study, the reaction of (HStLi)2 (without the penultimate styrene unit) with styrene was used, as discussed in the Methods section. Therefore, the reaction of non-associated HStLi (without the penultimate styrene unit) with styrene in the gas phase, which will be called system(mon-r) hereafter, was taken from our original paper and shown as
Although the reaction proceeds in three steps, only the first complex of the first step (4-a) (the initial complex) and the three structures of the final step (4-b, 3-c and 4-d) are shown here (the complete pathway is shown in Figure A1 of Appendix A.2). The small drawing on the left side of each structure is the side view of the right-hand side drawing. Carbon atoms of styrene are colored in blue, while those of the reacting HStgroup are colored in brown. In each drawing the two carbon atoms participating in the reaction are connected by dotted or full line and the distance between them is shown in nm. Hydrogen atoms are not shown. ∆H r and ∆G r , the relative enthalpy and free energy, respectively, are expressed in kJ·mol −1 . In this study, the reaction of (HStLi) 2 (without the penultimate styrene unit) with styrene was used as discussed in the Methods section. Therefore, the reaction of non-associated HStLi (without the penultimate styrene unit) with styrene in the gas phase, which will be called system(mon-r) hereafter, was taken from our original paper and shown as Figure 5. Comparing Figure 4 (system(dim-r)) with Figure 5 (system(mon-r)), it can be noted that the ∆H r value of transition state 3-c for system(dim-r) was 22 kJ·mol −1 , which is much lower than that of 5-b for system(mon-r), 50 kJ·mol −1 : this is because (HStLi) 2 does not undergo any preliminary dissociation in system(dim-r). However, the ∆G r of 3-c was 87 kJ·mol −1 , higher than that of 5-b by 5 kJ·mol −1 . As ∆G = ∆H − T∆S (T = absolute temperature and S = entropy), this difference is attributed to the difference in the -T∆S values of system(dim-r) and system(mon-r). The rate constant of the reaction is related to ∆G (which will be discussed in detail in the next section), and the reaction of non-associated polystyryllithium (system(mon-r)) is shown to be advantageous over that of dimer polystyryllithium (system(dim-r)). Polymers 2019, 11, x FOR PEER REVIEW 10 of 18 Figure 5. Reaction pathway for St/HStLi (system(mon-r)) in the gas phase originally shown in our previous paper. In each drawing the two carbon atoms participating in the reaction are connected by dotted or full line and the distance between them is shown in nm. ∆Hr and ∆Gr, the relative enthalpy and free energy, respectively, are expressed in kJ·mol −1 . The changes in the ∆Hr and ∆Gr values of system(dim-r) and (mon-r) are schematically shown in Figure 6 and 7, respectively. These figures clearly show that although ∆Hr of the transition state of system(dim-r) related to (HStLi)2 is low compared to that of system(mon-r) because of no dissociation in (HStLi)2, the ∆Gr value of the former is higher than that of the latter by 5 kJ·mol −1 , indicating that the route through the latter (system(mon-r)) is the predominant reaction path. -r)) in the gas phase originally shown in our previous paper. In each drawing the two carbon atoms participating in the reaction are connected by dotted or full line and the distance between them is shown in nm. ∆H r and ∆G r , the relative enthalpy and free energy, respectively, are expressed in kJ·mol −1 . The changes in the ∆H r and ∆G r values of system(dim-r) and (mon-r) are schematically shown in Figures 6 and 7, respectively. 
Figure 5. Reaction pathway for St/HStLi (system(mon-r)) in the gas phase, originally shown in our previous paper. In each drawing, the two carbon atoms participating in the reaction are connected by a dotted or full line and the distance between them is shown in nm. ∆Hr and ∆Gr, the relative enthalpy and free energy, respectively, are expressed in kJ·mol−1.

The changes in the ∆Hr and ∆Gr values of system(dim-r) and system(mon-r) are schematically shown in Figures 6 and 7, respectively. These figures clearly show that although the ∆Hr of the transition state of system(dim-r) related to (HStLi)2 is low compared to that of system(mon-r), because of the absence of dissociation of (HStLi)2, the ∆Gr value of the former is higher than that of the latter by 5 kJ·mol−1, indicating that the route through the latter (system(mon-r)) is the predominant reaction path.

Figure 6. Enthalpy changes in the gas phase for the addition of styrene to (HStLi)2(1-c) (system(dim-r)) and HStLi (system(mon-r)).

Figure 7. Changes in the free energy in the gas phase for the addition of styrene to (HStLi)2(1-c) (system(dim-r)) and HStLi (system(mon-r)).

3.2. Addition of Styrene to (HStLi)2 in Cyclohexane

Anionic polymerization of styrene is generally performed in polar or non-polar solvents. SBR (styrene-butadiene rubber) and styrene-butadiene block copolymers have been produced in non-polar solvents on an industrial scale. Therefore, it is important to study the propagation reaction of the anionic polymerization of styrene in non-polar solvents.

The transition states of St/[(HStLi)2(1-c)] in cyclohexane were optimized using the IEFPCM method of PCM; the transition state with the lowest ∆Grch, which corresponds to 3-c in the gas phase and will be called the transition state of system(dim-r) in cyclohexane hereafter, is shown in Figure 8 as 8-a, along with 8-b, which shows another view of 8-a from a different aspect. The structure of 8-a is nearly the same as that of 3-c (the C−C bond lengths of 8-a are essentially the same, while the C−Li bonds are larger by a small proportion (up to 0.005 nm), and the Li−Li distance and the distance between the two carbon atoms participating in the reaction are almost the same). The ∆Hrch and ∆Grch values for 8-a are also shown in Table 3, together with those for St/HStLi (system(mon-r)), which was originally described in our previous paper.
A tendency similar to that observed in the gas phase was observed in cyclohexane with respect to the ∆Hrch and ∆Grch values. The ∆Hrch value of the transition state of system(dim-r) was 28 kJ·mol−1, much lower than that of system(mon-r), 51 kJ·mol−1, owing to the absence of preliminary dissociation of the dimeric species in system(dim-r). However, ∆Grch for system(dim-r) is 93 kJ·mol−1, higher than that for system(mon-r), 86 kJ·mol−1, by 7 kJ·mol−1.
Table 3. Relative enthalpies and free energies in the gas phase and in cyclohexane for the transition states of system(dim-r) and system(mon-r).
In the Gas Phase
Two points follow from these results: (1) the rate constant of system(mon-r) is larger than that of system(dim-r); (2) usually, high molecular weight polymers are produced using small amounts of initiator, as can be seen in Table 4, which shows the conditions of the experiments performed to obtain the apparent activation energies for the anionic polymerization of styrene, as discussed in our previous study. At an initiator concentration of 10−3 mol·L−1, which is approximately the average concentration for these experiments, the effect of the initiator concentration is [Init]1/2/[Init] = 33, and Rm/Rd becomes much larger.
Table 4. Experimental conditions and results of the experiments performed to determine the apparent activation energies for the anionic polymerization of styrene (also discussed in our previous paper).
Thus, the advantage in the rate constant for the anionic polymerization of non-associated polystyryllithium over dimeric polystyryllithium has been demonstrated (point (1) above). Further, to obtain high molecular weight polymers, which is often the desired outcome, the reactions are conducted at low catalyst concentrations; under these conditions, the difference in the rate of reaction becomes even larger (point (2) above).
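The strength of the initiator-concentration effect is easy to verify numerically. The snippet below is illustrative only (concentrations in mol·L−1; the rate-constant ratio of 21 is the cyclohexane value quoted in the Conclusions):

```python
# R_mon ∝ [St]·[Init]^(1/2) (n = 2), while R_dim ∝ [St]·[Init] (n = 1),
# so R_mon/R_dim scales with [Init]^(1/2)/[Init] = [Init]^(-1/2).
k_ratio = 21  # k_mon/k_dim in cyclohexane, as reported in the text

for init in (1e-1, 1e-2, 1e-3, 1e-4):  # initiator concentration, mol/L
    factor = init ** -0.5
    print(f"[Init] = {init:.0e} M: [Init]^(-1/2) = {factor:6.1f}, "
          f"Rm/Rd = {k_ratio * factor:7.0f}")
```

At 10−3 mol·L−1 the concentration factor evaluates to about 32, in line with the value of 33 used above (the small difference reflects rounding), and it keeps growing as the initiator concentration is lowered.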
Some researchers, especially Fetters et al. [8-13] and Watanabe et al. [14], insist that polystyryllithium aggregates higher than dimers coexist in the system and that the dimeric and/or higher aggregates participate in the polymerization reaction. In their investigations, the existence of small amounts of higher aggregates was demonstrated using light and neutron scattering measurements; in addition, they studied the polymerization of butadiene with freeze-dried polystyryllithium. However, there is no decisive evidence for an advantage of polymerization by dimeric or higher species over that by non-associated species. Frischknecht et al. [32] reported the calculation result that star-like micelles and cylindrical micelles coexist in a polymeric system with butadienyllithium headgroups, based on the experimental results obtained by Stellbrink et al. [33]. Their calculations were carried out using classical point-charge dipoles, and they set aside the semi-empirical and DFT results for the binding energies shown in Figures 7 to 10 of their paper [32] because of inconsistency with the above results of Stellbrink et al., although these semi-empirical and DFT results agreed well with the generally accepted mechanism of the anionic polymerization of styrene [1-5]. Our study, including the results described in our previous study, shows the following: (1) although the polymerization of styrene with non-associated polystyryllithium requires the dissociation of dimeric polystyryllithium before the reaction, its true activation energy for the polymerization reaction is small; therefore, polymerization by the non-associated species is very rapid, and especially at low catalyst concentrations, where high molecular weight polymers are usually obtained, this propagation reaction dominates; (2) dimeric polystyryllithium can polymerize styrene, but it is not as reactive as non-associated polystyryllithium, although its relative enthalpy is lower because no preliminary dissociation occurs in the dimeric species. A reconsideration of their reports, taking our results into account, is therefore recommended.
Conclusions
In the case of the anionic polymerization of styrene in non-polar solvents, it is generally accepted that polystyryllithium is mainly associated into dimeric species and that only the small amount of non-associated polystyryllithium can propagate. However, the possibility of the reaction of dimeric polystyryllithium and higher aggregates was proposed by some researchers based on experimental data such as the addition of butadiene to freeze-dried polystyryllithium and the existence of higher aggregates, which was demonstrated using high-performance analytical techniques. In our previous study, the anionic polymerization of styrene with non-associated polystyryllithium in the gas phase and in cyclohexane was studied using M062X/6-31+G(d), a DFT calculation method. It was shown that polystyryllithium is mainly associated into dimeric species and that a small amount of non-associated species reacts with styrene; its relative enthalpy of the transition state in cyclohexane agreed with the apparent activation energies experimentally observed by Worsfold et al. and Ohlinger et al. Further, the most stable transition state was found to have a new structure, and the reason for the penultimate unit effect (slower addition of styrene to polystyryllithium with two or more styrene units compared to that with one styrene unit) was described. In this study, the polymerization of styrene with dimeric polystyryllithium was optimized in essentially the same manner as styrene polymerization with non-associated polystyryllithium in our previous study, and the following results were obtained. The most stable transition state of St/(HStLi)2 in cyclohexane has a structure in which the side chains of styrene and the two HStLi are situated in the same direction around the Li near the reactive site (structures 8-a and 8-b). Comparing this transition state with the most stable transition state for the reaction of non-associated polystyryllithium in cyclohexane, it was found that the relative enthalpy for the reaction of the dimeric species was 28 kJ·mol−1, much lower than that of non-associated polystyryllithium, 51 kJ·mol−1; this result is attributed to the absence of preliminary dissociation of dimeric polystyryllithium.
However, the relative free energy of the transition state for the reaction of dimeric polystyryllithium was 93 kJ·mol−1, higher than that of non-associated polystyryllithium by 7 kJ·mol−1. The rate of this reaction, R, is expressed as k[St][Init]1/n, where k is a rate constant proportional to e^(−∆Grch/RT), [St] and [Init] are the concentrations of styrene and the initiator, respectively, and n is 1 for the reaction involving dimeric polystyryllithium and 2 for the reaction involving non-associated polystyryllithium. These results therefore demonstrate the advantage of a higher rate constant for the polymerization of styrene with non-associated polystyryllithium compared to that with dimeric polystyryllithium (km/kd = 21). At low initiator concentrations, where high molecular weight polymers are usually obtained, the effect of the initiator concentration can be described using the relation [init-m]1/2/[init-d] = 33 at an initiator concentration of 10−3 mol·L−1, and the difference becomes much larger (Rm/Rd = (km/kd)·([init-m]1/2/[init-d]) = 690). As described in the preceding section, some researchers proposed that the reaction involving dimeric and/or higher aggregates is highly reactive. However, no decisive evidence is available on this point. In this study, it was demonstrated that dimeric polystyryllithium can react with styrene, but its reactivity is not as high as that of non-associated polystyryllithium, especially at low initiator concentrations, where high molecular weight polymers are generally obtained.
Table A1. Calculated electronic energies, zero-point (ZP)-corrected electronic energies, enthalpies (H) and Gibbs free energies (G) at 25 °C in the gas phase for dimeric (HStLi)2, intermediates and products of its reaction with styrene, and intermediates and products of the reaction of non-associated HStLi with styrene.
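The origin of the exponent 1/n in the rate law quoted above is not derived in the text; it follows from the standard pre-dissociation equilibrium treatment sketched below (a textbook argument, with Kd denoting the dissociation constant of the dimeric chain ends):

```latex
% Dissociation pre-equilibrium of the dimeric chain ends:
%   (PSLi)_2 <=> 2 PSLi,  K_d = [PSLi]^2 / [(PSLi)_2]
% With almost all chains dimeric, [(PSLi)_2] is proportional to [Init], so
\[
[\mathrm{PSLi}] = \sqrt{K_d\,[(\mathrm{PSLi})_2]} \;\propto\; [\mathrm{Init}]^{1/2}
\]
% Propagation through the free (non-associated) species:
\[
R_m = k_m\,[\mathrm{St}]\,[\mathrm{PSLi}] \;\propto\; [\mathrm{St}]\,[\mathrm{Init}]^{1/2}
\qquad (n = 2)
\]
% Direct addition of styrene to the dimer:
\[
R_d = k_d\,[\mathrm{St}]\,[(\mathrm{PSLi})_2] \;\propto\; [\mathrm{St}]\,[\mathrm{Init}]
\qquad (n = 1)
\]
```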
Investigation on in vitro dissolution rate enhancement of indomethacin by using a novel carrier sucrose fatty acid ester
Background and the purpose of the study The purpose of the present investigation was to characterize and evaluate solid dispersions (SD) of indomethacin prepared with a novel carrier, sucrose fatty acid ester (SFE 1815), to increase its in vitro drug release, and to further formulate them as tablets.
Methods Indomethacin-loaded SD were prepared by solvent evaporation and melt granulation techniques using SFE 1815 as the carrier in 1:0.25, 1:0.5, 1:0.75 and 1:1 ratios of drug to carrier. The prepared SD and tablets were subjected to in vitro dissolution studies in 900 mL of pH 7.2 phosphate buffer using apparatus I at 100 rpm. The promising SD were further formulated as tablets using suitable diluents (DCL 21, Avicel PH 102 and pregelatinised starch) to attain drug release similar to that of the SD. The dissolution data obtained were subjected to kinetic analysis by fitting them to various model-dependent equations, namely the zero-order, first-order, Higuchi, Hixson-Crowell and Peppas models. Drug-excipient compatibility was confirmed by Fourier transform infrared spectroscopy, X-ray diffraction, differential scanning calorimetry and scanning electron microscopy.
Results The in vitro dissolution data showed superior release from formulation S6, with a 1:0.5 drug to carrier ratio prepared by the solvent evaporation technique, compared with the other SD prepared at different ratios by the solvent evaporation and melt granulation techniques. The in vitro drug release was also superior to that of the physical mixtures prepared at the same ratio and to SD prepared with common carriers such as polyvinyl pyrrolidone and PEG 4000 by the solvent evaporation technique. Tablets (T8) prepared with DCL 21 as diluent exhibited superior release compared with the other tablets. The tablet formulation (T8) followed first-order release with a non-Fickian mechanism.
Conclusion SFE 1815, a novel third-generation carrier, can be used for the preparation of SD to enhance the in vitro drug release of indomethacin, an insoluble drug belonging to BCS class II.
Introduction The therapeutic efficacy of a drug product intended for oral administration depends upon its absorption in the gastro-intestinal tract and, ultimately, its bioavailability. It is well established that dissolution is frequently the rate-limiting step in the gastrointestinal absorption of a drug from a solid dosage form belonging to BCS class II (low solubility, high permeability). Drug release from poorly soluble drugs has been shown to be unpredictable and remains a problem for the pharmaceutical industry [1]. Several methods have been employed to improve the solubility of poorly water-soluble drugs, including increasing the particle surface area available for dissolution by milling [2], improving the wettability with surfactants or doped crystals [3], decreasing crystallinity by preparing a solid dispersion [4], the use of inclusion compounds such as cyclodextrin derivatives [5], the use of polymorphic forms or solvated compounds [6] and the use of salt forms. Each of these methods has its own advantages and disadvantages. Solid dispersions (SD) represent an ideal pharmaceutical technique for increasing the dissolution, absorption and therapeutic efficacy of drugs with poor aqueous solubility.
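The rate-limiting role of dissolution for a BCS class II drug is usually rationalized with the Noyes-Whitney equation, dC/dt = (D·A/(V·h))·(Cs − C), a standard relation not stated explicitly in this paper. The sketch below uses purely illustrative numbers to show why techniques that raise the effective surface area A or the apparent solubility Cs, as solid dispersions do, increase the dissolution rate.

```python
def noyes_whitney(D, A, V, h, Cs, C):
    """Dissolution rate dC/dt (Noyes-Whitney): D = diffusion coefficient,
    A = effective surface area, V = medium volume, h = diffusion layer
    thickness, Cs = saturation solubility, C = bulk concentration."""
    return D * A * (Cs - C) / (V * h)

# Illustrative baseline values (arbitrary, self-consistent units)
base = dict(D=1e-6, A=1.0, V=0.9, h=1e-3, Cs=1.0, C=0.0)
r0 = noyes_whitney(**base)

# Doubling the wetted surface area and tripling the apparent solubility,
# as a dispersion in a hydrophilic carrier might plausibly do:
improved = dict(base, A=2.0, Cs=3.0)
print(noyes_whitney(**improved) / r0)   # -> 6.0, a six-fold rate gain
```

The specific factors are invented; the point is only that the two levers multiply, which is the mechanistic rationale behind the solid-dispersion approach pursued here.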
The term "solid dispersion" refers to the dispersion of one or more active ingredients in an inert carrier or matrix in the solid state, prepared by melting, solvent, or melting-solvent methods [7]; the approach has been used by various researchers, who have reported encouraging results with different drugs [8]. The method of preparation and the type of carrier used strongly influence the properties of such solid dispersions [9]. Among the carriers used in the formation of solid dispersions, polyethylene glycol and polyvinyl pyrrolidone are the most common. First-generation (urea) and second-generation (PEG, polyvinyl pyrrolidone, HPMC, hydroxypropyl cellulose, starch derivatives, cyclodextrins) carriers have many disadvantages compared with third-generation carriers (poloxamer, gelucire, sucrose fatty acid esters), which are non-ionic and have led to the development of superior solid dispersions. Sucrose fatty acid esters (SFE) are non-ionic surface-active agents, namely mono-, di- and tri-esters of sucrose with fatty acids, manufactured from purified sugar and hydrogenated edible tallow or edible vegetable oils. They consist of sucrose residues as the hydrophilic group or polar head and fatty acid residues as the lipophilic group or non-polar tail, with a unique emulsification property that tolerates temperature variation [10,11]. SFE are currently regulated by the U.S. Food and Drug Administration (FDA) as food additives under Title 21, section 172.859 of the Code of Federal Regulations (CFR). However, studies of SFE in commonly used tablet formulations are limited and emphasize the use of specific types of SFE for particular approaches. SFE are non-toxic and biodegradable, as they can be enzymatically hydrolyzed to sucrose and fatty acids prior to intestinal absorption or excreted in faeces, depending on the degree of esterification, and they cover a wide range of HLB values (1-16) [12,13]. The present investigation focused on exploring sucrose fatty acid ester as a drug carrier to increase the solubility and dissolution rate of indomethacin through the formation of solid dispersions by various methods. The dissolution characteristics and physicochemical modification of the indomethacin-SFE solid dispersions were investigated by in vitro dissolution testing, FTIR, XRD, SEM and thermal analysis (DSC). These solid dispersions were formulated into tablets after optimization with a suitable diluent and were characterized physicochemically.
Materials Indomethacin was a gift sample from Macleods Pharmaceuticals Ltd., India. Sucrose fatty acid ester 1815 was obtained from Mitsubishi-Kagaku Foods Corporation, Japan. Avicel PH 102 was obtained from FMC Biopolymer. DCL 21 was purchased from Zeel Pharmaceuticals, India. All other chemicals were of reagent grade and used as received.
Composition of solid dispersions Solid dispersions contained indomethacin and SFE 1815 in ratios of 1:0.25, 1:0.5, 1:0.75 and 1:1, prepared by the melt granulation and solvent evaporation methods. Physical mixtures were prepared only for the promising ratio, for comparison. Table 1 lists the solid dispersions prepared along with the method of preparation, composition and codes.
Preparation of solid dispersions Melt granulation method Accurately weighed amounts of carrier were placed in an aluminum pan on a hot plate and melted, with constant stirring, at a temperature of about 50°C.
An accurately weighed amount of indomethacin was incorporated into the melted carrier with stirring to ensure homogeneity. The mixture was heated until a clear, homogeneous melt was obtained. The pan was then removed from the hot plate and allowed to cool at room temperature, and the resulting damp mass was passed through sieve no. 40. The granules obtained were transferred to a polybag and stored in a desiccator for further studies.
Solvent evaporation method Accurately weighed amounts of indomethacin and carrier (SFE 1815) were dissolved in a minimum quantity of methanol in a china dish. The solution was stirred until a slurry was formed. The solvent was evaporated under reduced pressure at 40°C, and the resulting residue was dried under vacuum for 3 h, stored in a desiccator at least overnight, ground in a mortar, and passed through mesh no. 40.
Physical mixtures Physical mixtures were obtained by pulverizing accurately weighed amounts of drug and polymer in a glass mortar with careful mixing until a homogeneous mixture was obtained. A drug to carrier ratio of 1:0.5 was prepared and subsequently stored at room temperature in a desiccator.
Preparation of tablets SD powder, diluents, disintegrant and binder were weighed as per the formulae given in Table 2; these were then passed through sieve no. 40, transferred to a polybag and blended for 5 min. To this homogeneous blend, magnesium stearate pre-sifted through sieve no. 60 was added and blended for 2 min. The resulting blend was compressed on a Cadmach 16-station compression machine at a common compression force of 2-3 kg/cm2, using 6 mm round, flat-faced punches.
In vitro dissolution studies Powder equivalent to 25 mg of indomethacin (for the SD) or tablets were introduced into the dissolution medium: 900 mL of phosphate buffer pH 7.2, with the rotational speed of the basket set at 100 rpm and the temperature at 37 ± 0.5°C. Aliquots (5 mL each) were withdrawn at predetermined time intervals by means of a syringe fitted with a 0.45 μm pre-filter and immediately replaced with 5 mL of fresh medium maintained at 37 ± 0.5°C. The samples were analyzed for indomethacin using a UV double-beam spectrophotometer (Elico SL 210) at 318 nm. For comparison, dissolution studies of pure indomethacin and marketed INDOCAP capsules, along with PM and SD prepared with polyvinyl pyrrolidone (PVP) and PEG 4000 at a drug to polymer ratio of 1:0.5 by the solvent evaporation technique, were also performed. All dissolution experiments were carried out in triplicate. Dissolution profiles were compared to quantify the differences in the rate and extent of drug release as influenced by the formulation and process variables, and to determine the mode and kinetics of drug release.
Release kinetics As a model-dependent approach, the dissolution data were fitted to five popular release models: the zero-order, first-order, Higuchi [14], Hixson-Crowell [15] and Korsmeyer-Peppas equations. The order of drug release from the matrix systems was described using zero-order or first-order kinetics, while the mechanism of drug release was studied using the Higuchi and Hixson-Crowell equations. The model with the highest correlation coefficient (r) was judged the more appropriate model for the dissolution data. According to the Korsmeyer-Peppas equation, the release exponent n is used to characterize different release mechanisms: if n is up to 0.45, the release follows Fickian diffusion; if n is between 0.45 and 0.89, the mechanism is non-Fickian (anomalous) diffusion; when n = 0.89 the transport is case II; and if n > 0.89 it is super case II transport [16]. The equations for the different models are presented in Table 3.
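As a minimal sketch of the model-dependent analysis just described (the dissolution values below are hypothetical placeholders, not data from this study), the snippet fits the linearized forms of the five models and reports the correlation coefficient r for each, which is how the best-fitting model is selected.

```python
import numpy as np

# Hypothetical cumulative-release data (time in min, release in %)
t = np.array([5, 10, 15, 20, 30], dtype=float)
rel = np.array([40, 62, 78, 90, 99], dtype=float)

def r_of_fit(x, y):
    """Pearson r of a straight-line fit y ~ x."""
    return abs(np.corrcoef(x, y)[0, 1])

frac = rel / 100.0
models = {
    "zero order":       r_of_fit(t, rel),                    # Q vs t
    "first order":      r_of_fit(t, np.log(100 - rel)),      # log(Q0-Q) vs t
    "Higuchi":          r_of_fit(np.sqrt(t), rel),           # Q vs sqrt(t)
    "Hixson-Crowell":   r_of_fit(t, 100**(1/3) - (100 - rel)**(1/3)),
    "Korsmeyer-Peppas": r_of_fit(np.log(t), np.log(frac)),   # log f vs log t
}
for name, r in models.items():
    print(f"{name:16s} r = {r:.4f}")

# Release exponent n = slope of log(f) vs log(t)
# (in practice fitted only on the portion with f <= 0.6)
n = np.polyfit(np.log(t), np.log(frac), 1)[0]
print(f"Peppas exponent n = {n:.2f}")
```

The printed n is then read against the thresholds given above (0.45 and 0.89) to classify the transport mechanism.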
Fourier transform infrared spectroscopy FTIR spectra can be used to detect drug-excipient interactions by following shifts in the vibrational or stretching bands of key functional groups. FTIR spectra were obtained using an Alpha FTIR spectrophotometer (Bruker Optik GmbH, Germany). All spectra were analyzed using OPUS 6.5 software. Samples were prepared by the KBr pellet method, gently mixing 1 mg of sample with 200 mg of KBr. The spectra were scanned over a wave number range of 4000-500 cm-1.
X-ray diffraction Diffraction patterns were recorded on an X-ray diffractometer (PW 1729, Philips, Netherlands) using monochromatic Cu Kα radiation with a nitrogen filter at a voltage of 40 keV and a current of 40 mA. The samples were analyzed over a 2θ range of 5-30° and the data were processed with Diffrac Plus V1.01 software.
Differential scanning calorimetry DSC is a frequently used thermoanalytical technique that generates data on melting endotherms and glass transitions. DSC was performed using a Mettler DSC 821 (Mettler-Toledo, Switzerland). Samples of 3-4 mg were encapsulated and hermetically sealed in flat-bottomed aluminum pans with crimped-on lids. Samples were allowed to equilibrate for 1 min and then heated in a nitrogen atmosphere over a temperature range of 25°C to 240°C at a heating rate of 5°C/min. An empty aluminum pan served as the reference. Nitrogen was used as the purge gas at a flow rate of 20 mL/min for all studies. Reproducibility was checked by running each sample in triplicate. Thermograms were obtained with the STARe SW 9.10 software.
Scanning electron microscopy SEM was employed to study the morphology of the samples. The samples were mounted on the SEM sample stub using double-sided adhesive tape and coated with gold (200 Å) under reduced pressure (0.001 torr) for 5 min using an ion sputtering device (Jeol JFC-1100 E, Japan). The gold-coated samples were observed under the SEM (JEOL JSM-840A, Japan) and photomicrographs at suitable magnifications were obtained with the aid of a software system (LINK ISIS, Oxford, UK).
Results and discussion In vitro dissolution studies Solid dispersions A gradual increase in drug release from the prepared SD was observed with increasing polymer concentration up to a point; further increases in concentration did not increase the drug release. Maximum drug release was obtained at the end of 30 min for S6, prepared by the solvent evaporation technique using SFE 1815 at a 1:0.5 drug to polymer ratio, whereas the same polymer processed by the melt granulation technique gave lower drug release. The enhanced drug release from SD prepared by the solvent evaporation technique in this study is consistent with the previous study by Patel et al. on SD prepared using PEG 6000 and PVP by solvent evaporation and melting methods [17]. Only 35.10%, 63.90%, 68.78% and 73.54% drug release was observed in 30 min from the pure drug, S9 (PM), the SD using PVP (S10) and PEG 4000 (S11), respectively, whereas 99.77% of the drug was released from S6, as shown in Figure 1. Earlier workers have also attempted to enhance the dissolution of indomethacin by the SD technique; El-Badry et al.
prepared SD using PEG 4000 and Gelucire 50/13 by a hot-melt method, and their results showed that a larger amount of carrier was required and that 90 min was needed for complete drug release [18]. The capability of SFE 1815 to enhance the drug release and bioavailability of an insoluble drug depends on general factors such as excellent wettability (clearly observable here, since the solid dispersion rapidly left the surface and dispersed into the bulk of the dissolution medium, markedly increasing indomethacin solubility) and on specific features such as (i) the HLB value (the higher the HLB value, the greater the enhancing ability), (ii) the length of the fatty acid chain (shorter fatty acids increase the release more than longer ones) [19], (iii) the number of carbon atoms in the fatty acid chain [19] and (iv) the proportion of monoesters (the higher the proportion of monoesters, the higher the hydrophilicity of the surfactant) [12,20,21]. The main reasons for the better drug release of the SD using SFE 1815 are its HLB value of 15 and the high proportion of monoesters (70%) in the ester composition of the carrier. The comparative dissolution profiles of S6 versus SD prepared with other carriers, PM prepared at the same ratio as S6, the pure drug and the marketed capsule are given in Figure 2. It is well known from the literature and from practical experience that SD are unstable as such, but stable when formulated as a tablet dosage form. The best SD (S6), prepared by the solvent evaporation technique at a 1:0.5 drug:SFE 1815 ratio, was therefore selected for the development of tablets, since it gave a release profile superior to the SD prepared by melt granulation with the same carrier, the SD prepared with common carriers by the solvent evaporation technique, and the PM. Tablets were developed using different diluents (pregelatinised starch, Avicel PH 102 and DCL 21) at different concentrations (25%, 55% and 75%). More than 99% of the drug was released from all the formulations. The T8 formulation, with 55% w/w of DCL 21, gave the maximum drug release in 30 min. The initial lag in drug release compared with the SD was due to the disintegration time required for the tablet; the profiles are shown in Figure 3. Even though pregelatinised starch is more soluble than Avicel PH 102, the percentage of drug released from those tablets was lower than with Avicel PH 102. The possible reason could be the swelling nature of pregelatinised starch [22]. As the concentration of pregelatinised starch was increased, the percentage of drug release decreased, which is one of the reasons for its use in the development of sustained-release tablets [23]. Tablets prepared with DCL 21 as diluent gave better release than those prepared with Avicel PH 102, owing to the more hydrophilic nature and higher solubility of DCL 21.
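Pairs of dissolution profiles (for example, tablet T8 versus the parent SD S6) can also be compared with the f2 similarity factor used in FDA/EMA guidance; this metric is not reported in the paper and the profiles below are hypothetical, but the formula is the standard one (f2 of 50 or more indicating similar profiles).

```python
import numpy as np

def f2_similarity(ref, test):
    """f2 similarity factor for two cumulative-release profiles sampled
    at the same time points (values in percent released)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)   # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

sd_s6  = [45, 70, 85, 95, 99]   # hypothetical profile of the SD
tab_t8 = [35, 62, 80, 93, 99]   # hypothetical profile of the tablet
print(f"f2 = {f2_similarity(sd_s6, tab_t8):.1f}")  # ~60 here, i.e. similar
```

With these invented values f2 evaluates to about 60, consistent with the qualitative statement that the tablet approaches the release of the SD after the initial disintegration lag.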
Release kinetics The drug release of indomethacin from formulations T2 and T3 followed zero-order kinetics, as indicated by the higher 'r' values for the zero-order release model. T1, T4, T5, T6, T7, T8 and T9 followed the first-order release model, as indicated by the higher 'r' values. The relative contributions of drug diffusion and erosion to drug release were further assessed by fitting the dissolution data to the Higuchi and Hixson-Crowell models. It was found that T2 and T3 followed zero-order kinetics with a non-Fickian diffusion mechanism, while T1, T4, T5, T7, T8 and T9 followed first-order kinetics with a non-Fickian diffusion mechanism. The T6 formulation followed first-order release with an erosion mechanism, as the 'r' value obtained was greater for the Hixson-Crowell model. The promising tablet formulation T8 followed first-order release with a non-Fickian diffusion mechanism. The results of the various order plots for the tablets are shown in Table 4.
Fourier transform infrared spectroscopy The spectrum of pure indomethacin showed characteristic peaks at 3020 cm-1 (aromatic C-H stretching), 2965 cm-1 (C-H stretching vibrations), 1761 cm-1 (C=O stretching vibrations), 1261 cm-1 (asymmetric aromatic O-C stretching) and 1086 cm-1 (symmetric aromatic O-C stretching). The SD (S6) and tablet formulation (T8) also exhibited the characteristic peaks of indomethacin, with no additional peaks observed in the spectra, indicating retention of the chemical identity of indomethacin, as shown in Figure 4. However, the intensity of the peaks corresponding to the drug was reduced or broadened in the SD and tablet formulations, possibly due to mixing with the surfactant and the addition of other excipients. The FTIR data confirmed that SFE 1815 did not alter the characteristic bands of the drug, indicating drug-excipient compatibility.
X-ray diffraction The X-ray diffractograms of pure indomethacin and the promising formulations are shown in Figure 5. The diffractogram of indomethacin showed characteristic sharp, intense diffraction peaks at 2θ values of 11.51°, 12.76°, 16.62°, 19.54°, 21.84°, 22.78°, 26.64°, 27.47° and 29.35°, reflecting the crystalline nature of the drug. Both formulations (S6 and T8) showed diffraction peaks at the respective 2θ values of pure indomethacin, although their relative intensities were reduced or slightly shifted, suggesting a reduced degree of crystallinity of the drug in these formulations.
Differential scanning calorimetry The DSC thermogram of pure indomethacin exhibited a sharp endothermic peak at 164°C corresponding to its melting point, indicating its crystalline nature. SFE 1815 showed an endothermic melting peak at 54.2°C. The melting peak of indomethacin in the SD (S6) and tablet (T8) shifted to 158.2°C and 158.4°C, respectively, as indicated in Figure 6. The shift observed in the melting peak of indomethacin in the formulations may be due to a physical interaction between the drug and the excipient. Compared with the pure drug, the melting peak was somewhat broadened in the formulations, which may be due to changes in the crystalline form of the drug.
Scanning electron microscopy The SEM images indicated that the carrier had been adsorbed onto the drug during the preparation of the SD. The appearance of the solid dispersion was homogeneous, with partial loss of drug crystallinity and a reduction in particle size, which may be the reason for the faster dissolution of the drug; this was further confirmed by the DSC and XRD studies.
The Integrity of the HMR complex is necessary for centromeric binding and reproductive isolation in Drosophila Postzygotic isolation by genomic conflict is a major cause for the formation of species. Despite its importance, the molecular mechanisms that result in the lethality of interspecies hybrids are still largely unclear. The genus Drosophila, which contains over 1600 different species, is one of the best characterized model systems to study these questions. We showed in the past that the expression levels of the two hybrid incompatibility factors Hmr and Lhr diverged in the two closely related Drosophila species, D. melanogaster and D. simulans, resulting in an increased level of both proteins in interspecies hybrids. The overexpression of the two proteins also leads to mitotic defects, a misregulation in the expression of transposable elements and decreased fertility in pure species. In this work, we describe a distinct six subunit protein complex containing HMR and LHR and analyse the effect of Hmr mutations on complex integrity and function. Our experiments suggest that HMR needs to bring together components of centromeric and pericentromeric chromatin to fulfil its physiological function and to cause hybrid male lethality. Introduction Eukaryotic genomes are constantly challenged by the integration of viral DNA or the amplification of transposable elements. As these challenges are often detrimental to the fitness of the organism, they frequently elicit adaptive compensatory changes in the genome. As a result of this process, the genomes as well as the coevolving compensatory factors, can rapidly diverge. Such divergences can result in severe incompatibilities eventually leading to the formation of two separate species [1,2]. Arguably the best characterized system for studying the genetics of reproductive isolation and hybrid incompatibilities is constituted by the two closely related Drosophila species D. melanogaster and D. simulans (D. mel and D. sim) [3]. One century of genetic studies has led to the identification of three fast evolving genes that are critical for hybrid incompatibility: Hmr (Hybrid male rescue), Lhr (Lethal hybrid rescue) and gfzf (GST-containing FLYWCH zinc finger protein) [4][5][6][7][8]. The genetic interaction of these three genes results in the lethality of D.mel/ D.sim hybrid males. Strikingly, all three genes are fast evolving and code for chromatin proteins suggesting that their fast evolution reflects adaptations to genomic alterations. While the molecular interaction between HMR and LHR is well established in pure species as well as in hybrids [9,10], the molecular basis for their genetic interaction with GFZF is unclear. Interestingly, in interspecies hybrids or when HMR/LHR are overexpressed, HMR spreads to multiple novel binding sites many of which have been previously characterized to also bind GFZF [11]. In the nucleus of tissue culture cells and in imaginal discs, HMR and LHR form defined foci that are clustered around centromeres [9,12,13]. Super-resolution microscopy and chromatin immunoprecipitation revealed that HMR is often found at the border between centromeres and constitutive pericentromeric heterochromatin bound by HP1a [12,14,15]. In addition to pericentromeric regions, HMR also binds along chromosome arms colocalizing with known gypsy-like insulator elements [14]. Depending on the tissue investigated, HMR shows slightly different binding patterns. 
It binds to telomeric regions of polytene chromosomes [9,11], colocalizes with HP1a in early Drosophila embryos [10] and localizes near DAPI-bright heterochromatin in larval brain cells [13]. Flies carrying Hmr or Lhr loss-of-function alleles show an upregulation of transposable elements (TEs), defects in mitosis and a reduction of female fertility in D. mel. Expression of transposable elements is increased particularly in ovarian tissue but also in cultured cells [9,10]. The mechanism that causes such a massive and widespread upregulation is not entirely clear, as most of the TEs that respond to a reduced Hmr dosage are not bound by HMR under native conditions [14]. Due to the overexpression of the HeT-A, TART and TAHRE retrotransposons, Hmr mutants show a substantial increase in telomere length [10] and an increased number of anaphase bridges during mitosis, presumably due to a failure of sister chromatid detachment during anaphase [13]. The massive upregulation of transposable elements in ovaries is possibly also the cause of the substantially reduced fertility of Hmr and Lhr mutant female flies [16]. Many of the phenotypes observed in cell lines lacking Hmr and Lhr are mirrored by Hmr and Lhr overexpression, highlighting the importance of properly balanced Hmr/Lhr levels [9]. Hybrids show enhanced levels of both proteins relative to the pure species and, consistently, are also characterized by a loss of transposable element silencing and of cell cycle progression [9,10,17]. The latter is thought to be the cause of the failure of male hybrids to develop into adults, given the almost complete absence of imaginal discs [13,18-20]. In addition, hybrids and Hmr/Lhr-overexpressing cells display a widespread mislocalization of HMR to several euchromatic loci on chromosome arms, including the previously unbound GFZF binding sites [9,11,12]. To better understand the deleterious effects observed in the presence of an excess of HMR, we decided to investigate the binding partners of HMR under native conditions and upon overexpression. Our results suggest that HMR belongs to a defined protein complex composed of six subunits under native conditions and gains novel chromatin-associated interactors when overexpressed. Moreover, we show that Hmr mutations that interfere with complex formation lead to the loss of HMR's function in pure species and in hybrids.
Cloning Cloning of the Hmr and Lhr ORFs into the pMT-FLAG-HA plasmid was described in [9]. Restriction fragments containing the Hmr and Lhr ORFs were sub-cloned into pFastBac Dual (containing an N-terminal HA-tag) and pFastBac HTb, respectively. The resulting plasmids were transformed into DH10Bac and recombined bacmids were isolated from clonal transformants. PCR-verified recombinant bacmids were used for the transfection of SF21 cells. Boh1 and Boh2 were PCR-amplified from genomic DNA, cloned into pJet1.2 and verified by sequencing. The ORFs were then sub-cloned into the pMT-FLAG-HA expression vector described in [9]. Full cloning details, plasmid sequences and plasmids are available on request.
Nuclear extraction for immunoprecipitation Cells were harvested, centrifuged at 1200 × g and washed with cold PBS. Cell pellets were resuspended in hypotonic buffer (10 mM Hepes pH 7.6, 15 mM NaCl, 2 mM MgCl2, 0.1 mM EDTA, cocktail of protease inhibitors + 0.25 μg/mL MG132, 0.2 mM PMSF, 1 mM DTT) and incubated on ice for 20 min. Cells were incubated for another 5 min after the addition of NP40 to a final concentration of 0.1% and then dounced with 20 strokes.
10% hypertonic buffer (50 mM Hepes pH 7.6, 1 M NaCl, 30 mM MgCl2, 0.1 mM EDTA) was added to restore isotonic conditions. Nuclei were centrifuged for 10 minutes at 1500 × g and the supernatant was discarded. Nuclei were washed once in isotonic buffer (25 mM Hepes pH 7.6, 150 mM NaCl, 12.5 mM MgCl2, 1 mM EGTA, 10% glycerol, cocktail of protease inhibitors + 0.25 μg/mL MG132, 0.2 mM PMSF, 1 mM DTT). After resuspension in the same buffer, nuclei were treated with benzonase (Merck 1.01654.0001) and incubated for 30 minutes at 4˚C on a rotating wheel. Soluble proteins were extracted by increasing the NaCl concentration to 450 mM and incubating for 1 h at 4˚C on a rotating wheel. Finally, the soluble material was separated from the insoluble chromatin pellet by centrifugation for 30 min at 20000 × g and used for immunoprecipitations.
Immunoprecipitation Anti-FLAG immunoprecipitations were performed using 20 μL of packed agarose-conjugated mouse anti-FLAG antibody (M2 affinity gel, A2220, Sigma-Aldrich) and were targeted either against the exogenously expressed transgenes (HMRwt-tg, HMRdC-tg, HMR2-tg) or against an endogenously FLAG-tagged HMR (HMRendo). The other IPs were performed by coupling the specific antibodies to 30 μL of Protein A/G Sepharose beads. Each bait was targeted with at least one antibody (rat anti-LHR 12F4, mouse anti-HP1a C1A9, rabbit anti-NLP, anti-FLAG-M2 for FLAG-BOH1 and FLAG-BOH2), while HMR was targeted with three different antibodies (rat anti-HMR 2C10 and 12F1, anti-FLAG-M2 for FLAG-HMR). Rabbit anti-NLP and mouse anti-HP1a were directly incubated with the beads, while rat anti-HMR and anti-LHR were incubated with beads pre-coupled with 12 μL of a rabbit anti-rat bridging antibody (Dianova, 312-005-046). FLAG-IPs in non-FLAG-containing nuclear extracts were used as mock controls for the FLAG-IPs. For all other IPs, unspecific IgG coupled to Protein A/G Sepharose or Protein A/G Sepharose alone was used as the mock control. The steps that follow were the same for all immunoprecipitations and were all performed at 4˚C. Antibody-coupled beads were washed three times with IP buffer (25 mM Hepes pH 7.6, 150 mM NaCl, 12.5 mM MgCl2, 10% glycerol, 0.5 mM EGTA) prior to immunoprecipitation. Thawed nuclear extracts were centrifuged for 10 minutes at 20000 × g to remove precipitates and subsequently incubated with antibody-coupled beads in a total volume of 500-600 μL IP buffer complemented with a cocktail of protease inhibitors plus 0.25 μg/mL MG132, 0.2 mM PMSF and 1 mM DTT, and rotated end-over-end for 2 h (anti-FLAG) or 4 h (other IPs) at 4˚C. After incubation, the beads were centrifuged at 400 × g and washed 3 times in IP buffer complemented with inhibitors and 3 times with 50 mM NH4HCO3 before on-bead digestion.
Sample preparation for mass spectrometry The pulled-down material was released from the beads by digestion for 30 minutes on a shaker (1400 rpm) at 25˚C with trypsin at a concentration of 10 ng/μL in 100 μL of digestion buffer (1 M urea, 50 mM NH4HCO3). After centrifugation, the peptide-containing supernatant was transferred to a new tube and two additional washes of the beads were performed with 50 μL of 50 mM NH4HCO3 to improve recovery. 100 mM DTT was added to the solution to reduce disulphide bonds and the samples were further digested overnight at 25˚C while shaking at 500 rpm. The free sulfhydryl groups were then alkylated by adding iodoacetamide (12 mg/mL) and incubating for 30 minutes in the dark at 25˚C.
Finally, the light shield was removed and the samples were treated with 100 mM DTT and incubated for 10 minutes at 25˚C. The digested peptide solution was then brought to pH ~2 by adding 4 μL of trifluoroacetic acid (TFA) and stored at -20˚C until desalting. Desalting was done by binding to C18 stage tips and eluting with elution solution (30% methanol, 40% acetonitrile, 0.1% formic acid). The peptide mixtures were dried and resuspended in 20 μL of 0.1% formic acid before injection.
Sample analysis by mass spectrometry Peptide mixtures (5 μL) were subjected to nanoRP-LC-MS/MS analysis on an Ultimate 3000 nano chromatography system coupled to a QExactive HF mass spectrometer (both Thermo Fisher Scientific). The samples were directly injected in 0.1% formic acid onto the separating column (150 × 0.075 mm, packed in-house with ReprosilAQ-C18, Dr. Maisch GmbH, 2.4 μm) at a flow rate of 300 nL/min. The peptides were separated by a linear gradient from 3% ACN to 40% ACN in 50 min. The outlet of the column served as the electrospray ionization emitter to transfer the peptide ions directly into the mass spectrometer. The QExactive HF was operated in a Top10 duty cycle, detecting intact peptide ions in positive ion mode in the initial survey scan at 60,000 resolution and selecting up to 10 precursors per cycle for individual fragmentation analysis. To this end, precursor ions with charge states between 2 and 5 were isolated in a 2 Da window and subjected to higher-energy collisional fragmentation in the HCD trap. After MS/MS acquisition, precursors were excluded from MS/MS analysis for 20 seconds to reduce data redundancy. Siloxane signals were used for internal calibration of the mass spectra.
Proteomics data analysis For protein identification, the raw data were analyzed with the Andromeda algorithm of the MaxQuant package (v1.6.7.0) against the Flybase reference database (dmel-all-translation-r6.12.fasta) including reverse sequences and contaminants. Default settings were used except for: Variable modifications = Oxidation (M); Unique and razor, Min. peptides = 1; Match between windows = 0.8 min. Downstream analysis of the output proteinGroups.txt file was performed in R (v4.0.1). If not otherwise stated, plots were generated with the ggplot2 package (v3.3.2). Data were filtered for Reverse, Potential.contaminant and Only.identified.by.site, and iBAQ values were log2-transformed and imputed using the R package DEP (v1.10.0, impute function with the following settings: fun = "man", shift = 1.8, scale = 0.3). Except for figures where data were bait-normalized, median normalization was performed. Statistical tests were performed by fitting a linear model and applying empirical Bayes moderation using the limma package (v3.44.3). AP-MS data for HMR complex identification (Figs 1 and S1) were compared with a pool of all control samples (IgG and FLAG mock IPs). For Figs 1C and S1E, enriched proteins from the AP-MS experiments of the HMR complex components were first selected (cut-off: log2FC > 2.5, p-adjusted < 0.05) and then the intersection was quantified and plotted with UpsetR (v1.4.0). The network graph in S1E Fig was prepared with a force-directed layout in D3.js and R. A further network graph was prepared using Cytoscape (3.4.0) with input from all AP-MS experiments and the String database (v11.0) [22].
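In spirit, the enrichment analysis reduces to the following sketch, a Python stand-in for the R/limma workflow just described (simple Welch t-tests replace limma's moderated statistics, and the intensity matrices are random placeholders):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder log2 iBAQ matrices: rows = proteins, columns = replicates
ip   = rng.normal(25, 2, size=(100, 4))   # e.g. anti-HMR IPs
ctrl = rng.normal(22, 2, size=(100, 4))   # pooled IgG/FLAG mock IPs

def impute(mat):
    """Impute left-censored missing values from a down-shifted normal
    distribution, analogous to DEP::impute(fun='man', shift=1.8, scale=0.3)."""
    mat = mat.copy()
    mu, sd = np.nanmean(mat), np.nanstd(mat)
    nans = np.isnan(mat)
    mat[nans] = rng.normal(mu - 1.8 * sd, 0.3 * sd, size=nans.sum())
    return mat

ip, ctrl = impute(ip), impute(ctrl)
log2fc = ip.mean(axis=1) - ctrl.mean(axis=1)
pvals = stats.ttest_ind(ip, ctrl, axis=1, equal_var=False).pvalue

# Benjamini-Hochberg adjustment, then the cut-off used for Fig 1C
order = np.argsort(pvals)
padj = np.empty_like(pvals)
padj[order] = np.minimum.accumulate(
    (pvals[order] * len(pvals) / np.arange(1, len(pvals) + 1))[::-1])[::-1]
padj = np.minimum(padj, 1.0)
hits = (log2fc > 2.5) & (padj < 0.05)
print(f"{hits.sum()} enriched proteins")
```

This is only a conceptual sketch; the published analysis additionally uses median normalization and limma's empirical Bayes moderation, which stabilizes variance estimates across the small replicate numbers typical of AP-MS.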
Fig 1 (legend, continued). Subsequently, the intersection among such enriched interactors was calculated and plotted with the UpsetR package. The intersection plot shows the number of interactors (bars) that are unique to one or more of the subsets representing the interactomes resulting from the different IPs (rows). Lines connecting dots define specific intersections between two or more interactomes. Unlabelled additional bait-specific interactors from (A) and (B) are available in S2 Table or in interactive plots at (URL). Additional volcano plots from the HMR complex component AP-MS experiments are shown in S1 Fig.
Western blot analysis Samples were boiled for 10 min at 96˚C in Laemmli sample buffer, separated on SDS-PAGE gels, processed for western blot using standard protocols and detected using rat anti-HMR 2C10.
ChIP-Seq Chromatin immunoprecipitation was essentially performed as in [14]. For each anti-FLAG ChIP reaction, chromatin isolated from 1-2 × 10^6 cells was incubated with 5 μg of mouse anti-FLAG antibody (F1804, Sigma-Aldrich; RRID: AB262044) pre-coupled to Protein A/G Sepharose. For ChIPs targeting total HMR, the same amount of chromatin was incubated with rat anti-HMR 2C10 antibody pre-coupled to Protein A/G Sepharose through a rabbit IgG anti-rat bridging antibody (Dianova, 312-005-046). Samples were sequenced (single-end, 50 bp) on an Illumina HiSeq2000. Sequencing reads were mapped to the Drosophila genome (version dm6) using bowtie2 (version 2.2.9) and filtered by mapping quality (-q 2) using samtools (version 1.3.1). Sequencing-depth- and input-normalized coverages were generated with Homer (version 4.9). Enriched peaks were identified by Homer with the parameters -style factor -F 2 -size 200 for each replicate. High-confidence FLAG-HMR peaks (a pool of HMRwt-tg and HMRdC-tg) were called when a peak was present in at least half of the samples (5 out of 10). Coverages were centred at high-confidence FLAG-HMR peaks in 4 kb windows and binned in 10 bp windows. The matrices generated in this way were z-score normalized by the global mean and standard deviation. HP1a-proximal peaks were defined as the 10 percent of peaks with the highest average HP1a ChIP signal in the 4 kb windows surrounding the peaks. Composite plots and heatmaps indicate the average ChIP signal (z-score) across replicates. Heatmaps were grouped by HP1a class and sorted by the average ChIP signal of native HMR in a 400 bp central window. For statistical analysis, the average ChIP signal (z-score) was calculated in a 200 bp central window across peaks for each replicate. P-values were obtained by a linear mixed-effect model (R packages: lme4 version 1.1-23 and lmerTest version 3.1-2), in which the average ChIP signal was included as the outcome, genotype (Hmr+ or HmrdC) and peak class (HP1a-proximal or non-HP1a-proximal) as fixed effects, and sample IDs as random intercepts. Chromosome-wide coverage plots were generated by averaging replicates, binning coverages in 50 kb windows and z-score normalizing by the global mean and standard deviation. The percentage of peaks on chromosome 4 relative to the total number of peaks was calculated for each replicate. The p-value was obtained by a linear model (R package: stats version 3.6.1), in which the percentage was included as the outcome and genotype (Hmr+ or HmrdC) as the independent variable.
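The peak-centred signal processing described above can be summarized in a few lines of numpy (a simplified stand-in for the Homer/R pipeline; the coverage array is a random placeholder):

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder: input-normalized coverage for 500 peaks in 4 kb windows
# binned at 10 bp -> 400 bins per peak, as described in the Methods
cov = rng.gamma(shape=2.0, scale=1.0, size=(500, 400))

# z-score normalization by the global mean and standard deviation
z = (cov - cov.mean()) / cov.std()

# Average signal in a 200 bp central window (20 bins) per peak,
# the quantity fed into the linear mixed-effect model
mid = z.shape[1] // 2
central = z[:, mid - 10:mid + 10].mean(axis=1)

# 'HP1a-proximal' class: top 10 % of peaks by average signal across the
# full 4 kb window (here computed on the same placeholder track)
win_mean = z.mean(axis=1)
hp1a_proximal = win_mean >= np.quantile(win_mean, 0.9)
print(central[hp1a_proximal].mean(), central[~hp1a_proximal].mean())
```

In the real analysis the HP1a classification comes from a separate HP1a ChIP track and the comparison across genotypes is done per replicate, but the windowing and z-scoring steps are exactly of this form.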
Immunofluorescent staining in SL2 cells Immunofluorescent staining of SL2 cells was performed as described previously [9,12]. Sum intensity projections were analyzed, and only cells with a minimum nucleoplasmic intensity of 70 a.u. in the anti-HMR channel were taken into account for further analysis. Two different quantifications were performed (Figs 4A and S6, respectively). In one case, cells were separated and counted based on the degree of colocalization between HMR and CENP-C: overlapping, partially overlapping or non-overlapping. In parallel, the number of CENP-C-marked centromeric foci associated with HMR signal was measured. Both cells and centromeric foci were blind-counted; the experiment was repeated in 2 biological replicates and for each replicate at least two slides were measured (for each slide, between 24 and 63 cells were quantified). Further details about the stainings for S2 Fig and about microscopy are available in S1 Methods.
Immunofluorescent staining in ovaries Flies were grown at 25˚C for 7-9 days and fed on yeast paste for at least 3 days prior to dissection. Ovaries were dissected in ice-cold PBS, then ovarioles were teased apart with forceps and moved to 1.5 mL tubes. PBS was removed and fixation solution (400 μL PBS with 2% paraformaldehyde and 0.5% Triton, plus 600 μL heptane) was added. Samples were incubated for 15 min at room temperature on a rotating wheel. Fixed ovaries were washed 3 times with 200 μL of PBS-T and then blocked for 30 min at room temperature on a rotating wheel (PBS-T, 2% NDS). After rinsing, 100 μL of primary antibody solution (PBS-T, rat anti-HMR-2C10 1:20, mouse anti-HP1a-C1A9 1:10 and 2% NDS) was added and samples were incubated rotating overnight at 6˚C. The primary antibody was washed off three times with PBS-T. Secondary antibody solution was added (200 μL PBS-T + donkey anti-mouse Alexa 488 1:600, donkey anti-rat Cy3 1:300 and 2% NDS) and samples were rotated for 2 h at room temperature. Samples were washed three times with PBS-T and incubated for 10 min at room temperature with 200 μL of DAPI 0.002 mg/mL. Following this, they were washed once with PBS-T and once with PBS. Stained ovaries were finally mounted with one drop of Vectashield in epoxy diagnostic slides (Thermo Fisher Scientific, 3 wells, 14 mm) and covered with high-precision cover glasses. Further details about the stainings for Figs 5 and S7 and about microscopy are available in S1 Methods.
Drosophila husbandry and stocks Drosophila stocks were reared on standard yeast-glucose medium and raised at 25˚C on a 12 h/12 h day/night cycle. For the transgenic fly lines, the entire D.mel genomic region including the melanogaster Hmr gene and parts of the flanking CG2124 and Rab9D genes (a 9538 bp fragment: X:10,481,572-10,491,109) was cloned into a plasmid backbone containing a mini-white gene and a p-attB site. Plasmids for the control Hmr+ stocks contained a wild-type copy of Hmr, while plasmids for the test stocks contained either of the mutated versions HmrdC or Hmr2. HmrdC plasmids carry a point mutation with an A-T substitution (base 3667 of the CDS) that turns a Gly codon into a premature STOP codon and results in a C-terminally truncated protein product (last 171 aa missing). Hmr2 plasmids carry the two point mutations E371K and G527A (described in [16,23]). The identity of the constructs was confirmed by sequencing. PhiC31 integrase-mediated transformations of the D. melanogaster line y1 w67c23; P{CaryP}attP2 (BL8622) were performed by BestGene Inc., resulting in transgenic integration at the attP2 docking site on chromosome 3 (3L:11070538).
All rescue experiments were performed by crossing the transgenes into the Df(1)Hmr-, ywv background [16].
Crosses for generating Hmr genotypes for complementation tests in D. melanogaster
Crosses for generating Hmr genotypes for hybrid viability assays
Young D.mel Df(1)Hmr-; Hmr* females (Hmr* denoting an Hmr transgenic allele) were crossed at 25˚C to 1-5 day old wild-type D.sim males (C167.4 or w501). Control D.mel Df(1)Hmr- stocks were crossed in parallel to D.sim males (control cross: no lethality rescue). Crosses were transferred regularly to fresh medium. When larval tracks became visible, vials were transferred to 20˚C to improve recovery of the interspecific hybrids. Vials were kept until the last adults eclosed, and the number and genotype of the hybrid offspring were scored. Rescue was measured by counting the number of viable transgene-carrying males from the corresponding cross. In the rescue experiment, Hmr+ served as a positive control (lethality rescue) and Hmr2 as a negative control (no lethality rescue). For statistical testing, the Wilcoxon rank-sum test (non-parametric) was used for pairwise comparisons with FDR correction for multiple testing using the ggpubr package (v0.4.0, compare_means function with the following settings: formula = percent_males_offspring~Hmr_allele, method = 'wilcox.test', p.adjust.method = 'fdr').
Fertility assays Three 1-3 day old D. melanogaster females were crossed for 2-3 days with six wild-type D. melanogaster males. Flies were then transferred to fresh vials, and again every 5 days, 3 times in total. Vials were scored 15-18 days after the first eggs were laid, to make sure that all adults had eclosed but no F2 was included. Vials in which one female or more than one male was missing were not scored. The whole assay was performed at 25˚C. Tested females Df(1)Hmr-; Hmr*/+ were always grown with and compared to their respective control siblings Df(1)Hmr-; +/+, obtained from crosses between Df(1)Hmr- and Df(1)Hmr-; Hmr*/+. Rescue was measured as the total offspring counted per female. In the rescue experiment, Hmr+ served as a positive control (fertility rescue) and Hmr2 as a negative control (no fertility rescue). For statistical testing, the Wilcoxon rank-sum test (non-parametric) was used for pairwise comparisons with FDR correction for multiple testing using the ggpubr package (v0.4.0, compare_means function with the following settings: formula = offspring_per_mother~Hmr_allele, group.by = 'day', method = 'wilcox.test', p.adjust.method = 'fdr').
RNA extraction, cDNA synthesis and quantitative RT-PCR 2-10 pairs of ovaries were homogenized in Trizol (Thermo Fisher; cat. no. 15596026) and processed according to the manufacturer's instructions. RNA concentration and the A260/280 ratio were measured by NanoDrop. 1 μg of RNA was treated with DNase I, recombinant, RNase-free (Roche; cat. no. 04716728001), followed by cDNA synthesis using the SuperScript III system (Invitrogen; cat. no. 18080051), both following the respective manufacturer's protocols. For qPCR, equal volumes of cDNA for each sample were mixed with Fast SYBR Green Master Mix (Applied Biosystems; cat. no. 4385610) and run on a LightCycler 480 Instrument II (Roche; cat. no. 05015243001) in a 384-well setup. Three technical replicates were used for each sample, with 18S rRNA as the housekeeping gene. The annealing temperature for all tested primers was 60˚C and the list of primers used is available on request. Plots of the qPCR results were generated in R using the ggplot2 package. For statistical testing, Welch's t-test was used with FDR correction for pairwise comparisons using the rstatix package (v0.5.0).
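A Python equivalent of the pairwise nonparametric testing with FDR correction used for the viability and fertility assays could look like the sketch below (the offspring counts are invented placeholders; the Wilcoxon rank-sum test is scipy's Mann-Whitney U, and false_discovery_control requires SciPy 1.11 or later):

```python
from itertools import combinations
from scipy import stats

# Placeholder offspring counts per female for three Hmr alleles
groups = {
    "Hmr+":  [62, 58, 71, 65, 60],   # positive control (rescue)
    "Hmr2":  [12, 18,  9, 15, 11],   # negative control (no rescue)
    "HmrdC": [20, 25, 17, 22, 19],   # test allele
}

pairs, pvals = [], []
for (n1, g1), (n2, g2) in combinations(groups.items(), 2):
    # Wilcoxon rank-sum test == Mann-Whitney U test
    p = stats.mannwhitneyu(g1, g2, alternative="two-sided").pvalue
    pairs.append(f"{n1} vs {n2}")
    pvals.append(p)

# Benjamini-Hochberg FDR correction across the pairwise tests
padj = stats.false_discovery_control(pvals, method="bh")
for pair, p, q in zip(pairs, pvals, padj):
    print(f"{pair}: p = {p:.4f}, FDR-adjusted p = {q:.4f}")
```

This mirrors what ggpubr::compare_means does with method = 'wilcox.test' and p.adjust.method = 'fdr'; the actual analysis was run in R as stated above.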
Characterization of a distinct HMR protein complex in Drosophila melanogaster To identify the proteins interacting with the hybrid incompatibility (HI) proteins HMR and LHR under native conditions, we used specific monoclonal antibodies targeting HMR and LHR to perform affinity purification coupled with mass spectrometry (AP-MS) from nuclear extracts prepared from D.mel SL2 cells. For HMR, we additionally validated our results by performing AP-MS with a FLAG antibody in cells carrying an endogenously tagged HMR (HMRendo) [14]. These experiments revealed the existence of a set of four stable protein interactors shared between HMR (Fig 1A) and LHR (Fig 1B). Besides HMR and LHR, this six-subunit complex contains nucleoplasmin (NLP) and nucleophosmin (NPH) as well as two poorly characterized proteins, CG33213 and CG4788, which we named Buddy Of HMR 1 (BOH1) and Buddy Of HMR 2 (BOH2), respectively. We confirmed the complex composition by AP-MS experiments using antibodies recognizing the individual subunits, with which we immunoprecipitated all subunits of the complex (Figs 1C and S1A-S1E, and S2 Table). Moreover, all subunits largely colocalize in SL2 cells, which further supports our findings (S2 Fig and [15]). HMR and LHR have also been shown to interact and colocalize with heterochromatin protein 1a (HP1a) [9,10,12,24,25], which is consistent with our finding that both proteins, as well as all other complex subunits, interact with HP1a (Figs 1C and S1A-S1E, and S2 Table). However, in all AP-MS experiments HP1a is present in lower amounts than the other components of the complex. This may be because it is not a stable component or is present in only a fraction of the complexes. Like HP1a, the individual subunits of the complex are very likely also components of other protein complexes, as we detect multiple proteins interacting exclusively with one or two components of the HMR complex (Figs 1C and S1, and S2 Table). In summary, our AP-MS results reveal the existence of a stable HMR/LHR-containing protein complex under physiological conditions. As most subunits also contribute to other complexes, we wondered whether a surplus of HMR and LHR would affect complex composition.
Overexpression of HMR and LHR results in a gain of novel protein-protein interactions
The importance of a physiological HMR and LHR dosage has been shown under non-physiological conditions such as interspecies hybrids or D. melanogaster cells in which the two proteins are artificially co-overexpressed. The increased dosage of the two proteins results in their extensive genomic mislocalization [9,11,12]. We therefore hypothesized that the overexpression of HMR and LHR in pure species also results in a gain of novel interactions, which are potentially responsible for the novel binding pattern observed. Indeed, we confirmed 30 chromatin proteins that display a stronger interaction with HMR upon HMR and LHR overexpression ([9]; Figs 2A and S3, and S2 Table). These novel interactors include several proteins important for chromatin architecture, such as the insulator proteins CP190, SU(HW), BEAF-32, IBF2 and HIPP1, or the mitotic chromosome condensation factor PROD. Thirteen of the novel interactors contain zinc finger DNA-binding domains and three contain a MYB/SANT domain similar to HMR (Fig 1C and S3 Table).
Intriguingly, one of the novel interactors is the product of the recently discovered missing third hybrid incompatibility gene gfzf. Finding GFZF as an HMR interactor under expression conditions that resemble the situation in hybrids provides a molecular explanation for its aberrant co-localization with HMR both in hybrids and upon overexpression of HMR/LHR in tissue culture cells [11]. We also found that the ratio between HMR/LHR and the other complex components is lower in AP-MS experiments of ectopically expressed HMR/LHR (Fig 2B and 2C). Notably, under these conditions the HMR/LHR/HP1a ratio is less affected by HMR/LHR overexpression than the interactions with NLP, NPH, BOH1 and BOH2. Establishing to what extent these newly acquired interactors or the formation of a functional HMR complex contribute to HMR/LHR's physiological function and their lethal function in male hybrids would provide further mechanistic details. To this end, we investigated the HMR interaction proteome upon ectopically expressing mutant HMR proteins in D. melanogaster SL2 cells.

Fig 2. (A) Differential interaction proteome between endogenously FLAG-tagged HMR (HMR endo, n = 4) and ectopically expressed HMR (HMR wt-tg, n = 9). Only proteins enriched in HMR wt-tg or HMR endo vs CTRL (p < 0.05) were considered. Components of the HMR core complex and HP1a are shown in red, all other factors in blue. To display the differences within the HMR endo and HMR wt-tg interactomes, the enrichment of each putative interactor (Log2(iBAQ HMR*/iBAQ control)) was normalized to the enrichment of the HMR protein used as bait. The resulting values were then plotted against each other. Dots below the diagonal indicate a stronger enrichment in the HMR endo pull down, dots above the diagonal a stronger enrichment in the ectopically expressed HMR. (B) Differences of the ratio between HMR and members of the HMR complex and HP1a with and without ectopic expression of HMR. Plotted is the relative enrichment of each HMR complex member to the enrichment of the HMR used as bait (= the offset from the diagonal). Error bars reflect the standard error of the means (SEM). (C) Network diagram of all factors enriched in the AP-MS experiments of HMR endo or HMR wt-tg (p < 0.05). Red nodes represent HMR complex components and HP1a, blue nodes additional HMR binders. Nodes containing Zn-finger domains have a diamond shape. Solid edges connect the HMR complex and HP1a, dotted edges reflect protein-protein interactions predicted using the STRING database [22], dashed edges reflect protein-protein interactions identified by AP-MS experiments performed in this work (Fig 1 and S3 Table). In (A) and (B), proteins were labelled only if enriched in HMR wt-tg or HMR endo vs CTRL (p < 0.05). https://doi.org/10.1371/journal.pgen.1009744.g002

Two different Hmr mutations interfere with HMR complex formation and HMR localization

Most of the Hmr alleles that rescue hybrid male lethality are either null mutations or mutations that dramatically reduce the level of HMR (Df(1)Hmr-, Hmr 1, Hmr 3; [4,9,16]) and therefore do not provide further mechanistic insights regarding Hmr's role in hybrid incompatibility. The Hmr 2 loss of function allele, however, carries only two point mutations: one within Hmr's third of four MADF domains and one in an unstructured region of Hmr. This third MADF domain is unusual in that it is predicted to be negatively charged and possibly mediates chromatin interactions rather than DNA binding [16]. Our previous experiments showed that HMR 2-tg mislocalizes when expressed in SL2 cells [9]. To test whether these phenotypes can be explained by altered interaction partners, we expressed a FLAG-tagged HMR protein carrying the point mutations found in Hmr 2-tg together with Myc-LHR in SL2 cells. A comparison of the interactomes of the ectopically expressed HMR 2 and wildtype HMR FLAG fusions suggests that this mutation disrupts the interaction between HMR and NLP, NPH, BOH1 or BOH2 while maintaining its interaction with LHR and HP1a (Fig 3A).

Interestingly, most of the factors that are picked up by ectopic HMR appear to be more represented in HMR 2-tg than in wildtype HMR purifications (Fig 3B), suggesting that the interaction with novel interactors is not sufficient for HMR-mediated lethality in hybrids. Considering that the genetic interaction of Hmr and Lhr is critical for hybrid lethality [7], and the previously established physical interaction between the two proteins, we asked whether interfering with their physical interaction would result in a loss of function. As the C-terminal BESS domain of HMR has been suggested to be responsible for the HMR/LHR interaction [7,10,26], we recombinantly expressed either wild type HMR or C-terminally truncated HMR (HMR dC-tg) together with LHR using a baculovirus expression system. Supporting previous evidence of a direct HMR/LHR interaction mediated by the HMR BESS domain, we could copurify LHR only with full length HMR (S4A Fig). Consistent with this observation in vitro, HMR dC-tg also shows a substantial reduction of interaction with LHR and HP1a when expressed in SL2 cells (Figs 3C and S4B and S4C), while it still interacts with NLP, NPH, BOH1 and BOH2. Besides the reduced interaction with LHR and HP1a, HMR dC-tg also interacts less efficiently with other heterochromatin components that are picked up upon HMR overexpression, such as SU(VAR)3-7, HIPP1, PROD or CP190 (S4D-S4G Fig and S3 Table). These findings suggest that the deletion of the BESS domain does not lead to a complete disintegration of the HMR complex but specifically interferes with its binding to LHR, HP1a and other heterochromatin proteins. Therefore, the use of the Hmr dC-tg mutant allele allows us to selectively test for the functional importance of the HMR interaction with LHR and heterochromatin components.

Fig 3. Two different Hmr mutations interfere differently with the HMR interactome and HMR complex formation. Effect of the Hmr 2-tg (A) and Hmr dC-tg (C) mutations on the HMR interaction with the HMR complex components and HP1a. The y-axis represents the Log2 fold-change of HMR 2-tg/HMR wt-tg and HMR dC-tg/HMR wt-tg, respectively, calculated after normalization of each sample to the enrichment of the HMR protein used as bait. Error bars reflect the SEM (HMR wt-tg: n = 9; HMR dC-tg: n = 10; HMR 2-tg: n = 3). Differential interaction proteome between ectopically expressed wild type or mutated HMR (HMR wt-tg versus HMR 2-tg (B) or HMR wt-tg versus HMR dC-tg (D)). Only proteins enriched in HMR wt-tg or HMR endo vs CTRL (p < 0.05) are shown. Components of the HMR complex are shown in red, all other factors in blue. To display the differences within each interactome, the enrichment of each putative interactor was normalized to the enrichment of the HMR protein used as bait. The resulting values were then plotted against each other. Dots above the diagonal indicate a stronger enrichment in the HMR wt-tg pull down, dots below the diagonal a stronger enrichment in the HMR mutated alleles (HMR 2-tg or HMR dC-tg). All proteomic data are available at the PRIDE partner repository [39]. Further details are available in S3 Table. https://doi.org/10.1371/journal.pgen.1009744.g003

We next wondered whether the HMR C-terminal truncation and its concomitant loss of interaction with LHR and HP1a would influence its nuclear localization. A co-staining with antibodies against the exogenously expressed HMR and a centromeric marker (anti-CENP-C) revealed a rather diffuse nuclear localization of HMR dC-tg in SL2 cells, which is in sharp contrast to the full length HMR, which forms distinct bright (peri)centromeric foci (Figs 4A and S6A) [9,12]. To investigate the binding of ectopic HMR dC-tg to the genome, we performed genome-wide ChIP-Seq profiling of HMR dC-tg. HMR has previously been shown to have a bimodal binding pattern in SL2 cells [14]. One class of binding sites is found in proximity to HP1a-containing heterochromatic regions, whereas a second class is found along chromosome arms associated with gypsy insulators. Consistent with the failure to interact with HP1a, HMR dC-tg chromatin binding is specifically impaired at HP1a-dependent sites (Figs 4B, 4C and S5), leading to a substantial reduction of HMR dC-tg binding in proximity to centromeres in both metacentric chromosomes 2 and 3 (Fig 4B and 4C) as well as throughout the mostly heterochromatic chromosome 4 (S5C and S5D Fig). Altogether, our results show that while the HMR C-terminus is required for HMR's interaction with LHR and HP1a and for localization in close proximity to centromeres, it is dispensable for HMR's binding to NLP, NPH, BOH1 and BOH2 and to genomic loci unrelated to HP1a. The Hmr 2 mutation, in contrast, does not affect HMR's ability to interact with LHR/HP1a but weakens the interaction with the other complex components.

Fig 4. The HMR C-terminus is required for HMR localization in proximity to centromeres and HP1a-bound chromatin. (A) Ectopic HMR dC-tg fails to form bright (peri)centromeric foci in SL2 cells. Immunofluorescence images of cells expressing different Hmr transgenic alleles (HA-Hmr wt-tg, HA-Hmr dC-tg or HA-Hmr 2-tg) together with wild type LHR, showing the co-staining of HA-HMR and CENP-C. Based on the overlap between HMR and CENP-C signals, cells were categorized in three groups (overlapping (blue), partially overlapping (yellow) and non-overlapping (grey)) and the number of cells belonging to each group quantified. The nuclear boundary is indicated by the white dashed line and the centromeres (as identified by α-CENP-C staining) by white arrows. Error bars represent the standard error of the means (n = 2). Size bars indicate 5 μm. (B) HP1a-proximal binding is specifically disrupted in HMR dC-tg. Average plot of FLAG-HMR ChIP-seq profiles (z-score normalized) centred at high confidence FLAG-HMR peaks in 4 kb windows. HP1a-proximal (left) and non-HP1a-proximal (right) peaks are shown for HMR wt-tg (light blue) and HMR dC-tg (dark blue). (C) HMR dC-tg genome-wide binding is impaired in proximity to centromeres and mostly unaffected at chromosome arms. Chromosome-wide FLAG-HMR ChIP-seq profiles (z-score normalized) for HMR wt-tg (light blue), HMR dC-tg (dark blue) and HP1a (green). Chromosomes 2L, 2R, 3L and 3R are shown.
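The bait normalization described in the Fig 2 and Fig 3 captions above can be sketched in base R as follows; the iBAQ numbers are placeholders and the choice of subtraction in log2 space is our reading of "normalized to the enrichment of the bait", not code from the paper:

```r
# Minimal sketch: bait-normalized enrichments for two pulldowns,
# plotted against each other as in Fig 2A. All values are placeholders.
ibaq <- data.frame(
  protein      = c("HMR", "LHR", "NLP", "NPH", "BOH1", "BOH2", "HP1a"),
  endo_ip      = c(100, 80, 60, 55, 40, 35, 10),
  endo_ctrl    = c(1, 1, 2, 2, 1, 1, 2),
  ectopic_ip   = c(400, 300, 90, 80, 50, 45, 30),
  ectopic_ctrl = c(1, 1, 2, 2, 1, 1, 2)
)

# Enrichment of each putative interactor: log2(iBAQ_IP / iBAQ_control)
enr_endo    <- log2(ibaq$endo_ip / ibaq$endo_ctrl)
enr_ectopic <- log2(ibaq$ectopic_ip / ibaq$ectopic_ctrl)

# Normalize to the bait (HMR); subtraction in log2 space is one
# plausible reading of "normalized to the enrichment of the bait"
bait <- ibaq$protein == "HMR"
norm_endo    <- enr_endo    - enr_endo[bait]
norm_ectopic <- enr_ectopic - enr_ectopic[bait]

# Points below the diagonal are relatively stronger in the endogenous IP
plot(norm_ectopic, norm_endo, pch = 19,
     xlab = "bait-normalized enrichment (ectopic HMR)",
     ylab = "bait-normalized enrichment (endogenous HMR)")
abline(0, 1, lty = 2)
text(norm_ectopic, norm_endo, ibaq$protein, pos = 3, cex = 0.7)
```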
The fact that both mutations impair the centromere-proximal binding suggests that complex integrity is necessary for HMR's genomic localization.

HMR subnuclear localization changes during Drosophila oogenesis

It has been debated whether the HMR distribution we observe in SL2 cells reflects the physiological situation in flies [9,13,27]. In flies, HMR has been shown to bind to telomeric regions of polytene chromosomes [9,11], to mostly colocalize with HP1a in early Drosophila embryos [10] and larval brain cells [13], and to localize to the centromere in imaginal disc cells [9]. Even in SL2 cells, where most of the HMR protein localizes close to the centromere, we detect some HMR foci that do not colocalize with centromeric foci, and vice versa [12]. Unfortunately, all these somewhat contradictory results come from experiments performed under different conditions and with different antibodies, making them hard to compare. To have a more comprehensive view of HMR's localization in flies, we used Drosophila ovaries as a model organ. This tissue contains different cell types at different stages of development, allowing the study of the distribution of HMR in comparable conditions within the same experiment. Ovaries consist of several ovarioles containing different developmental stages that mature in an anterior to posterior direction. Both somatic and germline stem cells originate from the germarium, at the anterior tip of the ovariole (Fig 5A). Germline stem cells (GSCs) divide asymmetrically to produce another stem cell and a germline cyst cell (GCC). The GCC undergoes four mitotic divisions to form a cyst of 16 cells, one of which will differentiate into the oocyte. The others become polyploid nurse cells that feed the oocyte. The GSCs and the differentiated cyst are surrounded by a layer of somatic cells termed escort cells (ESCs) and follicle cells (FCs), respectively [28,29]. In most cells, HMR colocalizes with HP1a-containing pericentromeric heterochromatin (Figs 5A and S7). However, in both FCs and GSCs we also observe a colocalization of HMR with the CENP-C-labelled centromeric region (Figs 5A and S7). After migration to the posterior part of the germarium, the encapsulated egg chamber matures into a single oocyte surrounded by polyploid nurse cells and follicle cells. In both stage 3 oocytes and nurse cells, which have not yet fully polyploidized, CENP-C-marked centromeres are still clearly visible and HMR colocalizes at least partly with them (Figs 5B and 5C). Following FC development, we observe that as long as they are mitotically cycling (in early-stage egg chambers) HMR mostly colocalizes with CENP-C [9,10,12,13]. However, in late-stage egg chambers, post-mitotic and endoreplicating FCs show virtually no CENP-C signal and HMR localizes primarily to HP1a-enriched domains (Fig 5D). Altogether, our results show that within the same tissue, HMR can localize to centromeres in mitotically cycling cells, while diffusing into the pericentromeric HP1a-marked regions in polyploid cells where centromeres are disrupted.

The HMR C-terminus is required for HMR's physiological function in D. melanogaster

After verifying that the bimodal centromeric and pericentromeric localization of HMR can also be observed in the developing ovaries of D. melanogaster, we wanted to investigate whether the C-terminus of HMR is required for HMR to fulfill its physiological function. To do this, we generated fly lines expressing full length HMR (HMR +) or mutant forms of it (HMR dC, HMR 2).
We crossed these alleles into a mutant background (Df(1)Hmr-, hereafter referred to as Hmr ko) and performed complementation assays to assess whether the transgenic alleles are able to rescue Hmr wild type functions (Fig 6A and 6B). All assays were done in ovaries, since HMR is well expressed and important for fertility and retrotransposon silencing in this tissue. After verifying the expression of the Hmr transgenic alleles (S6D Fig), we investigated their localization in follicle cells of sequentially developing egg chambers. In particular, we examined whether the early-stage centromeric localization and the late-stage heterochromatic localization of HMR were rescued by HMR dC (Figs 6C and S8). In late-stage follicle cells lacking CENP-C, truncation of HMR's C-terminus abrogates its localization to heterochromatic domains and results in a rather diffuse nuclear distribution (Fig 6C). This is in accordance with our ChIP-seq results in SL2 cells, where HMR dC binding to HP1a sites is substantially reduced. However, in early-stage follicle cells, unlike in SL2 cells, the HMR C-terminal truncation does not affect its colocalization with centromere foci (S8 Fig). Instead, here HMR dC appears to be even more centromere-restricted than the wild type allele. The observation that the HMR dC mutation disrupts the HP1a-proximal localization of HMR but not the centromeric one supports a bimodal binding model in which the HMR C-terminal domain is necessary for the interaction with HP1a heterochromatin while its N-terminus mediates the interaction with centromeres. The differences between tissue culture and follicle cells might be explained by affinity differences between wild type and mutant alleles of Hmr that are only revealed when an endogenous copy of Hmr is present, as in SL2 cells. This is not the case in follicle cells, where the transgenic HMR copies are inserted in an Hmr ko background and hence not in competition with endogenous HMR. As the silencing of transposable elements (TEs) has previously been shown to be impaired by Hmr loss of function mutations or knockdown [9,10], we tested whether the Hmr dC or the Hmr 2 alleles were able to restore TE silencing in an Hmr ko background (Fig 6D). Whereas full length HMR was able to strongly repress all the TEs studied, neither Hmr dC nor Hmr 2 was able to do so, showing expression levels comparable to Hmr deletion mutants. Since Hmr loss of function mutations have been shown to also cause a major reduction in female fertility [16], we also tested Hmr dC for the complementation of this phenotype. Similar to what we observed for TE silencing, the Hmr dC allele was unable to rescue the fertility defect (Fig 6E and S4 Table). Altogether, these results show that the Hmr C-terminus is required for HMR localization and physiological function in D. melanogaster ovaries.

The Hmr C-terminus is necessary for male hybrid lethality and reproductive isolation

To understand whether the toxic Hmr function in interspecies hybrids also requires its C-terminus, we performed a hybrid viability suppression assay (Fig 7A). We crossed D. melanogaster mothers carrying different Hmr alleles with wild type D. simulans stocks and counted the number of viable adult males in the offspring with respect to the total offspring (Fig 7 and S5 Table). In crosses with Hmr mutant mothers (Df(1)Hmr-), hybrid male offspring counts are comparable to those of females.
Introduction of a wild type Hmr transgene (Hmr +) fully suppresses hybrid male viability, while males carrying an Hmr dC or an Hmr 2 allele are still viable. These results indicate that Hmr dC, similarly to Hmr 2, has lost its toxic function in hybrid males.

HMR and LHR reside in a distinct six-subunit complex

HMR and LHR are two well-known Drosophila chromatin-binding factors whose overexpression results in male lethality in hybrid animals. When expressed at native levels in D. melanogaster, HMR and LHR form a distinct protein complex containing six subunits. In addition to HMR and LHR, this complex is composed of the two known histone chaperones NLP and NPH and two as yet uncharacterized proteins, Buddy of HMR (BOH) 1 and 2 (CG33213 and CG4788). BOH1 is a putative transcription factor containing 4 Zn-finger DNA binding domains, which binds primarily to sites of constitutive ("green") heterochromatin but also has a connection to components of ("red") euchromatin [30]. Many of the BOH1 binding sites are also bound by LHR [31] and HMR [14], suggesting that BOH1 might play a role in recruiting the complex to specific genomic loci. While BOH1 contains 4 Zn-finger domains likely to bind DNA, no discernible domain can be identified in BOH2. Similar to HMR, LHR and BOH1, no ortholog of BOH2 can be identified outside the dipteran lineage, and no molecular analysis of it has been done so far.

Fig 6 (caption, partial). … bars indicate 5 μm. (D) Defective retrotransposon silencing in Hmr dC and Hmr 2. RT-qPCR analysis measuring mRNA abundance for the telomeric retrotransposons HeT-A and TART in female ovaries. Heterozygous genotypes (Hmr ko; Hmr*/+) were compared with the respective non-complemented siblings (Hmr ko; +/+). Y-axis: log2 fold-change of the mean (complemented/non-complemented) after normalization to a housekeeping gene (18S rRNA). Error bars represent the standard error of the mean (n = 3). A Welch t-test was used for pairwise comparisons with Hmr + as the reference group and FDR for multiple testing adjustment (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001). (E) The HMR C-terminus is required for female fertility. Fertility defects are not complemented by Hmr dC: heterozygous females (Hmr ko; Hmr*/+) were compared with the respective non-complemented siblings (Hmr ko; +/+). Number of adult offspring per mother assessed in a time course (females aged 5-10, 10-15 and 15-20 days). A Wilcoxon rank sum test was used for pairwise comparisons with Hmr + as the reference group and FDR for multiple testing adjustment (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001). For details about fertility assays refer to S4 Table. https://doi.org/10.1371/journal.pgen.1009744.g006

Fig 7 (caption, partial). (A) … (C167.4 and w 501). Suppression of hybrid male viability is analyzed within the transgene-carrying offspring only. Hmr* refers to any transgene used in this work. (B) The HMR C-terminus is necessary for the HMR lethal function in hybrid males. Suppression of hybrid male viability was measured as the percentage of viable adult males in the total hybrid adult offspring. Crosses from non-complemented Hmr ko mothers were used as control. Dots represent individual biological replicates. A Wilcoxon rank sum test was used for pairwise comparisons with Hmr + as the reference group and FDR for multiple testing adjustment (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001). For details refer to S5 Table. https://doi.org/10.1371/journal.pgen.1009744.g007
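The log2 fold-changes in the Fig 6D caption above follow standard ΔΔCt logic, with 18S rRNA as the housekeeping gene. A minimal sketch of that arithmetic — all Ct values are placeholders, and the sign convention is the usual one (expression scales as 2 to the power of -Ct):

```r
# Minimal ddCt sketch for the RT-qPCR analysis (placeholder Ct values).
ct <- data.frame(
  genotype = rep(c("complemented", "non_complemented"), each = 3),
  ct_HeTA  = c(28.1, 28.4, 28.0, 24.9, 25.2, 25.1),  # target retrotransposon
  ct_18S   = c(12.0, 12.1, 11.9, 12.0, 12.2, 12.1)   # housekeeping gene
)

# Normalize target to housekeeping gene within each sample
ct$dct <- ct$ct_HeTA - ct$ct_18S

# log2 fold-change of complemented vs non-complemented siblings:
# -(difference of mean dCt), since expression ~ 2^(-Ct)
ddct <- mean(ct$dct[ct$genotype == "complemented"]) -
        mean(ct$dct[ct$genotype == "non_complemented"])
log2_fc <- -ddct
log2_fc  # negative values indicate repression by the transgene

# Welch t-test on dCt values, as used for the pairwise comparisons
t.test(dct ~ genotype, data = ct, var.equal = FALSE)
```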
Here, we show that BOH2 interacts with HMR and LHR under native conditions, but also with a set of other nuclear factors. Nlp and Nph are the two Drosophila paralogues of the nucleoplasmin family of histone chaperones, which are important for sperm decondensation upon fertilization [32], chromosome pairing and centromere clustering [33,34]. Both proteins depend on HMR to localize to the border between centromeric and pericentromeric chromatin [15].

Excess HMR and LHR interact with novel chromatin factors

As hybrid animals suffer from increased levels of HMR and LHR, we performed AP-MS experiments of ectopically expressed HMR/LHR in SL2 cells in the presence of endogenous levels of HMR/LHR. This strategy allowed us to preferentially isolate proteins that interact with surplus molecules of HMR and LHR. Indeed, ectopically expressed HMR and LHR bind to several heterochromatic factors which are not detected under native conditions. Among the many Zn-finger-containing proteins that may explain the dispersed localization of overexpressed HMR on polytene chromosomes [9], we observe a stable interaction of the extra HMR/LHR molecules with GFZF, another factor required for male hybrid lethality [8]. This finding is consistent with HMR and GFZF aberrantly colocalizing in interspecies hybrids and upon HMR/LHR overexpression [11].

HMR contains two functionally important protein-protein interaction modules

Our proteomic analysis of Hmr mutants suggests that HMR's N-terminal MADF3 domain, which is mutated in the Hmr 2 allele, mediates the interaction with NLP, NPH, BOH1 and BOH2, while its C-terminus binds LHR and through this interaction presumably recruits HP1a [9,16,24,35]. Further genome-wide and cytological experiments with these Hmr alleles reveal that the integrity of these interactions, as well as a balanced expression of Hmr/Lhr, is vital for HMR's proper targeting, its physiological function, and its ability to kill hybrid males. In particular, both our ChIP-Seq data and ovary stainings in early follicle cells show that HMR's C-terminus is necessary for localization to heterochromatin. Additionally, ovary stainings in follicle cells up to stage S7, where HMR wild type localization is centromeric, show that HMR's C-terminus is instead dispensable for its binding to the centromere. The latter finding is in apparent contradiction with the stainings we performed in SL2 cells, where the HMR dC mutation disrupts centromeric localization. However, it is worth noting that the experiments in SL2 cells were done in the presence of endogenous (wild type) HMR, and therefore reflect a competitive situation between HMR alleles, which could result in a more sensitive readout, revealing subtle differences between wild type and mutant HMR. The loss of HMR dC binding to centromeres in SL2 cells presumably occurs because CENP-C, which is required for HMR's recruitment at centromeres [12], is already saturated by the endogenous HMR. In contrast, we found a very centromere-restricted localization of HMR in mitotically cycling follicle cells, in which the transgenic Hmr allele was the sole source of HMR. These findings support a model in which HMR is recruited to centromeres by CENP-C in mitotically cycling cells. However, in the absence of this recruitment, HMR's C-terminus interacts with HP1a-containing heterochromatin, resulting in a pericentromeric localization instead. The functional assays in flies showed that both mutants (Hmr 2 and Hmr dC) are not able to rescue the Hmr ko phenotype.
These findings support the hypothesis that HMR needs to interact with heterochromatic factors as well as components of the chromocenter to accomplish its function. We therefore consider it unlikely that the Hmr mutant phenotypes observed in D. mel (reduced female fertility, upregulation of TEs) are solely dependent on HMR's ability to bind heterochromatin, since the mutant HMR 2 protein is still able to interact with LHR and HP1a yet does not rescue these phenotypes. At the same time, HMR's localization to the chromocenter is not sufficient to achieve full functionality, as the HMR dC protein still localizes to the chromocenter in the absence of a wild type copy but nevertheless fails to rescue the fertility defect. We therefore propose that HMR organizes the chromocenter by directly interacting with centromeric as well as heterochromatic factors, and that both interactions are required for HMR to fulfil its function. In fact, defects in chromocenter bundling have been shown to result in micronuclei formation and loss of cellular viability in imaginal discs and lymph glands [36,37], a phenotype that is also observed in interspecies hybrids [19,20].

HMR's dual binding may be required for hybrid male lethality

Mutations impairing HMR's ability to bind either heterochromatin or the centromere not only fail to complement Hmr null phenotypes in D. mel, but also no longer cause lethality of male hybrids of D. mel and D. sim. However, in contrast to the phenotypes discussed above, hybrid lethality is a consequence of overexpression rather than a loss of HMR and LHR [9]. Due to the technical difficulty of studying the HMR complex in prematurely dying hybrid male flies, we simulated the hybrid situation by overexpressing HMR and LHR in D. mel tissue culture cells. As we had shown in the past that HMR mel and LHR mel interact with a very similar set of proteins to their D. sim counterparts when expressed in SL2 cells [9], we think their overexpression in this cell system constitutes a useful proxy for the hybrid situation. A comparison of the native HMR interaction proteome with that of overexpressed HMR revealed that overexpression leads to novel interactions with known chromatin factors such as BEAF-32, CP190, PROD, HIPP1 or GFZF [9,11], the last of which has been shown to be important for hybrid male lethality [8]. These newly gained protein-protein interactions we observe in the presence of excessive HMR and LHR are probably the reason for an aberrant targeting of the complex, leading to a possible mislinkage of genomic loci and a failure to properly regulate the chromocenter in mitotically dividing cells. In combination, these effects will eventually result in defects in cell cycle progression [13,20]. Consistently, as neither of the Hmr mutations described here is able to simultaneously bind both heterochromatin and centromeric chromatin components, the expression of these alleles does not result in hybrid lethality. Interestingly, Jagannathan and Yamashita have recently shown [38] that the hybrid incompatibility factors HMR and LHR lead to chromocenter disruption in hybrids, suggesting a complex interplay between the HMR complex and other factors that must be finely tuned to maintain a functional chromocenter and viable cells, a condition that is not met in hybrids. While we still do not understand the detailed molecular mechanism that mediates the physiological function of the identified HMR complex, our results suggest that its ability to interact with two types of chromatin is of critical importance.
It may very well be that the increased levels of HMR and LHR in hybrids pick up novel additional chromatin proteins, thereby unleashing potentially lethal chromatin driver systems that evolved differently in the two closely related species. The isolation of a defined complex involving the hybrid incompatibility proteins HMR and LHR will allow a more detailed molecular analysis of its function and sets the ground for future comparative studies on the divergent evolution of its components within species and on their lethal interactions in hybrids.

Supporting information

S4 Fig (caption, partial). … FLAG antibody targeting FLAG-HMR and western blot probed with anti-HA (HMR), anti-Myc (LHR) and anti-HP1a. (C) Western blot showing LHR immunoprecipitation in SL2 cells stably transfected with either full length HMR or a C-terminally truncated HMR dC along with Myc-LHR. IP performed with anti-Myc antibody targeting Myc-LHR and western blot probed with anti-FLAG (HMR) and anti-LHR. (D) Volcano plot highlighting interactions depleted in HMR dC. X-axis: log2 fold-change of FLAG-HMR dC IPs (right side of the plot) vs FLAG-HMR + IPs (left side of the plot). Y-axis: significance of enrichment given as -log10 p-value calculated with a linear model. HMR complex subunits are labelled in red. In blue are factors depleted upon HMR dC mutation (among the endogenous or overexpression-induced interactions of HMR). Unlabelled additional bait-specific interactors are listed in S3 Table. (E) GO terms depleted upon HMR dC mutation. (F) Volcano plot highlighting interactions depleted in HMR 2. X-axis: log2 fold-change of FLAG-HMR 2 IPs (right side of the plot) vs FLAG-HMR + IPs (left side of the plot). Y-axis: significance of enrichment given as -log10 p-value calculated with a linear model. HMR complex subunits are labelled in red. In blue are factors depleted upon HMR 2 mutation (among the endogenous or overexpression-induced interactions of HMR). Unlabelled additional bait-specific interactors are listed in S3 Table. (G) GO terms depleted upon HMR 2 mutation. In (D) and (G), proteins labelled or used for the GO search include endogenous or overexpression-induced interactions of HMR (i.e. enriched in HMR + or HMR vs CTRL with p < 0.05) and differentially enriched between HMR + and the HMR mutant analyzed (log2 fold-change (HMR*/HMR +) < 1.5). (TIFF)

S5 Fig (related to Fig 4). (A) Heatmaps of ChIP-seq profiles (z-score normalized) centred at high confidence FLAG-HMR peaks in 4 kb windows. Peaks are grouped by HP1a class and sorted by the ChIP signal in the native HMR ChIP. From left to right: anti-HMR ChIP in untransfected cells, anti-HMR ChIP in cells transfected with FLAG-Hmr + and FLAG-Hmr dC, anti-FLAG ChIP of cells transfected with FLAG-Hmr + or FLAG-Hmr dC, and anti-HP1a and anti-CP190. The latter two are representative of the two classes of HMR peaks: HP1a-proximal and non-HP1a-proximal. (B) Chromosome-wide FLAG-HMR ChIP-seq profiles (z-score normalized) for Hmr + (light blue), Hmr dC (dark blue) and HP1a (green). Chromosomes X and 4 are shown. (C) Hmr dC is depleted at the heterochromatin-rich chromosome 4. Percentage of FLAG-HMR ChIP-seq peaks located on chromosome 4 for each replicate (n = 5). Hmr + (light blue) and Hmr dC (dark blue) are shown. P-values were obtained by a linear model. FLAG-HMR plots represent an average of 5 biological replicates. (TIFF)

S6 Fig (related to Figs 4A and 5). (A) The HMR C-terminus is required for HMR to form bright centromeric foci. Quantification of the percentage of centromeric foci (marked by CENP-C) associated with HMR in immunofluorescent stainings in SL2 cells expressing different Hmr transgenes (Hmr +, Hmr dC and Hmr 2). Stainings were performed with DAPI, anti-HA (recognizing HA-HMR) and anti-CENP-C antibodies. For staining details refer to Fig 4A. (B) The number of centromeric foci per cell inversely correlates with HMR's association with centromeres. Scatter plot displaying the relation between the percentage of centromeric foci associated with HMR (x-axis, binned in 10% units) vs the number of centromeric foci per cell (y-axis). Each dot represents a measured cell (a pool of all experiments from all Hmr alleles is displayed). (C) The ectopic expression of Hmr mutants correlates with higher numbers of centromeric foci. Boxplots displaying the number of centromeric foci per cell (y-axis) for each of the Hmr alleles (x-axis). Each dot represents a measured cell (…).

Acknowledgments

… Kenneth Boerner for advice on ovary staining. We would also like to thank Sophia Groh for graphical aid, and Elisabeth Schröder-Reiter and Markus Hohle for constant support. We thank the entire Imhof group, Peter Becker and the Becker department for helpful discussions. In addition, we thank Stefan Krebs and Helmut Blum from the LAFUGA facility for sequencing.
In vitro and molecular docking studies of an anti-inflammatory scaffold with human peroxiredoxin 5 and tyrosine kinase receptor

A new series of 4-(3-(2-amino-3,5-dibromophenyl)-1-(4-substituted benzoyl)-4,5-dihydro-1H-pyrazol-5-yl)benzonitrile compounds (4a-h) was synthesized and evaluated for in vitro anti-inflammatory activity. The spectral (IR, NMR) and elemental analysis data of the products indicated the formation of the new pyrazoles 4a-h. Compound 4e exhibited potent anti-inflammatory activity with 85.45% inhibition. This value was compared with standard diclofenac sodium, and the data are explained using molecular docking analysis of receptor-ligand binding. These results demonstrate that pyrazole derivatives are potential inhibitors of Human Peroxiredoxin 5 and the tyrosine kinase receptor for the treatment of inflammation-related illness.

Background: Inflammation is the natural defense mechanism of the body to deal with infection and tissue damage [1]. However, uncontrolled inflammatory cascades are responsible for various diseases such as chronic asthma, rheumatoid and osteoarthritis, multiple sclerosis, inflammatory bowel diseases, psoriasis, diabetic nephropathy [2], as well as tumor initiation and malignant progression [3]. Pain is the most common inflammatory indication needing medical attention and increases the financial burden annually [4]. It is widely believed that deaths related to sepsis will continue to rise. Research efforts in the field of sepsis have largely focused on the innate immune system and have conceptually viewed sepsis as a syndrome of hyper-inflammation [5,6]. Under this paradigm, overzealous activation of the host inflammatory response, ostensibly intended for pathogen eradication, becomes deregulated and consequently causes auto-injury to the host, which leads to multiple organ failure and death [7]. Peroxiredoxin 5 (PRDX5), also known as PrxV/AOEB166/PMP20/ACR1, is a novel thioredoxin peroxidase widely expressed in mammalian tissues. PRDX5 may be addressed intracellularly to mitochondria, peroxisomes and the cytosol, suggesting that this peroxiredoxin may have an important role as an antioxidant in organelles that are major sources of ROS, namely mitochondria and peroxisomes, and in the control of signal transduction due to its localization in the cytosol [8][9][10]. Moreover, the physiological importance of PRDX5 has recently been emphasized by its ability to prevent p53-induced apoptosis and to inhibit intracellular hydrogen peroxide accumulation by TNFα [11][12]. It is therefore of interest to report the synthesis, biological evaluation and docking studies of an anti-inflammatory scaffold and its binding features with protein targets for further consideration.

Materials and Methods: All chemicals were purchased commercially and used without further purification. Melting points were determined in open capillary tubes and are uncorrected. Elemental analysis was carried out on a PERKIN ELMER 240 CHN analyzer. FT-IR spectra of the title compounds were recorded on a Shimadzu FTIR spectrophotometer in the 400-4000 cm-1 range using KBr pellets. 1H NMR spectra were recorded in DMSO and CDCl3 on a 400 MHz BRUKER AVANCE NMR spectrometer. Chemical shifts are reported in ppm, with tetramethylsilane (TMS) used as the internal reference for every NMR spectrum.
Analysis of the anti-inflammatory activities using the HRBC membrane stabilization method: The anti-inflammatory activity of compounds 4a-h was assessed by the in vitro HRBC membrane stabilization method. Blood was collected from healthy volunteers. The collected blood was mixed with an equal volume of Alsever solution (dextrose 2%, sodium citrate 0.8%, citric acid 0.05%, sodium chloride 0.42%, and distilled water 100 mL) and centrifuged with isosaline. To 1 mL of HRBC suspension, an equal volume of test drug at three different concentrations (100, 250, and 500 µg/mL) was added. All the assay mixtures were incubated at 37 °C for 30 minutes and centrifuged. The haemoglobin content in the supernatant solution was estimated using a spectrophotometer at 560 nm [22]. The negative control used was Alsever's solution with blood in it, containing no aspirin [15].

Molecular docking: Crystal structures of the protein complexes used in this study were obtained from the Protein Data Bank (www.rcsb.org/pdb) [16]. Docking calculations were carried out using AutoDock 4.2 [17,18]. Gasteiger partial charges were added to the ligand atoms. Non-polar hydrogen atoms were merged, and rotatable bonds were defined. Docking calculations were carried out on the target protein models described below. Essential hydrogen atoms, Kollman united atom type charges, and solvation parameters were added with the aid of AutoDock tools [16]. Affinity (grid) maps with 0.375 Å spacing were generated using the autogrid program [16]. The AutoDock parameter set and distance-dependent dielectric functions were used in the calculation of the van der Waals and electrostatic terms, respectively. Docking simulations were performed using the Lamarckian genetic algorithm (LGA) and the Solis & Wets local search method [19]. The initial position, orientation, and torsions of the ligand molecules were set randomly. All rotatable torsions were released during docking. Each docking experiment was derived from 2 different runs that were set to terminate after a maximum of 250,000 energy evaluations. The population size was set to 150. During the search, a translational step of 0.2 Å, and quaternion and torsion steps of 5 were applied.

The spectral characterization (IR, NMR) and elemental analysis data of the products indicated the formation of the new pyrazoles 4a-h (Figure 1). Absorption bands in the region 1598-1620 cm-1 are attributed to the C=N stretching group [20] in pyrazoles 4a-h. Strong absorption bands in the region 2351-2360 cm-1 are ascribed to CN stretching. The carbonyl stretching vibrations are observed around 1660 cm-1. In the title compounds, the N-H stretching vibration [21,22] was observed at ca. 3456 cm-1, which supports the formation of the pyrazole ring. The 1H NMR spectra revealed two doublet of doublets signals around 3.74 and 3.09 ppm, readily assigned to the CHB and CHA protons, respectively, of the pyrazole molecule. The benzylic proton of the pyrazole moiety appeared as a doublet of doublets in the region 3.95-4.86 ppm. The -NH2 protons appear in the proton NMR spectra as a sharp singlet around 5.42-5.90 ppm. Additionally, aromatic protons resonate as multiplets in the range 6.90-8.69 ppm. These signals confirm the formation of pyrazoles 4a-h. The designed compounds were studied for in vitro anti-inflammatory activity by the HRBC membrane stabilization method.
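As a minimal sketch of how the HRBC readout is typically converted into activity values — the percent-protection formula and the interpolation below are the standard treatment for this assay rather than calculations taken from the paper, and all absorbance numbers are placeholders:

```r
# Sketch: percent inhibition of haemolysis and a crude IC50 estimate
# from HRBC absorbance readings at 560 nm (placeholder values).
conc        <- c(100, 250, 500)      # test concentrations, ug/mL
abs_test    <- c(0.62, 0.41, 0.20)   # A560 of supernatant, test tubes
abs_control <- 0.85                  # A560 of the negative control

pct_inhibition <- (1 - abs_test / abs_control) * 100
pct_inhibition

# Crude IC50 by linear interpolation of inhibition vs log10(concentration)
ic50 <- 10^approx(x = pct_inhibition, y = log10(conc), xout = 50)$y
ic50  # ug/mL
```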
The anti-inflammatory activity data (Table 1) indicated that all the test compounds exhibited significant activity when compared to standard diclofenac sodium. All tested compounds offered adequate protection in a dose-dependent manner; the activity increased with increasing concentration. The results of the in vitro membrane stabilization activity of the synthesized pyrazolines (4a-h) are presented in Table 1 and Figure 2. According to these results, all the compounds showed dose-dependent inhibition of haemolysis. Compounds 4f (IC50 = 159.1 µg/mL) and 4b (IC50 = 180.3 µg/mL) displayed very good activity within the series as compared to standard diclofenac sodium (IC50 = 127.3 µg/mL). Compounds 4g (IC50 = 185.0 µg/mL) and 4d (IC50 = 192 µg/mL) showed moderate activity, while 4c (IC50 = 352.1 µg/mL) and 4h (IC50 = 370.6 µg/mL) exhibited lower anti-inflammatory activity as compared to standard DCS.

In an attempt to explain how our designed compounds interact with the active site of Human Peroxiredoxin 5 (Figure 3), flexible docking simulations were carried out to predict the receptor-ligand interactions using AutoDock 4.2. The crystal structure of Human Peroxiredoxin 5 (PDB ID: 1HD2) was obtained from the PDB (www.rcsb.org/pdb). A graphical depiction of the protein is given in Fig. 2. Compound 4a showed a binding energy value of -5.95 kcal/mol and makes one hydrogen bond with THR147. It was noticeable that methyl group substitution at the phenyl group (4d) had a significant impact on the activity. Compound 4e has the highest binding score in this experiment, as shown in Table 2. Figures 4a and 5a show the three-dimensional binding pose of the active compound 4d with Human Peroxiredoxin 5. A hydrophobic interaction was observed between the ligand and the protein. Compound 4e also showed π-alkyl interactions with the THR44 and PRO40 amino acids. Compounds 4f and 4g showed similar binding energies. Replacing the phenyl group by a pyridine ring (compound 4h) gave a binding energy of about -5.61 kcal/mol. Figures 4b and 5b show the three-dimensional binding pose of the active compound 4d with tyrosine kinase HCK. The X-ray crystal structure of the protein tyrosine kinase Hck (PDB: 2HCK) is shown in Figure 6. The newly designed compounds 4a-h were docked into the tyrosine kinase Hck protein to understand the binding interactions. Compound 4a has a binding energy of -5.41 kcal/mol with three hydrogen bonds, viz. THR179, GLN529 and GLN528. On introducing a bromo substitution in the phenyl group (4b), the ligand exhibited a good binding energy of -5.84 kcal/mol. As shown in Table 3, compound 4b showed hydrogen bond interactions with GLN528, GLN526 and GLN529; in addition, it has π-alkyl interactions with the LYS203 and THR523 residues. Compound 4c shows a good binding energy of -6.63 kcal/mol and makes hydrogen bonds with the GLN526 and GLN525 amino acid residues. Further, it makes π-alkyl interactions with the GLU524, THR523, LYS203, THR179 and ARG155 residues. The binding pattern of ligand 4d with the protein clearly revealed polar interactions with ARG205, ARG175 and SER185; in addition, it showed π-alkyl interactions with LYS203 and THR523. As seen from Table 3, ligand 4e showed the highest binding affinity (-7.4 kcal/mol) within the protein, and it forms a hydrogen bond interaction with the THR179 amino acid. A polar interaction was observed between the ligand and the protein.
Compound 4e also showed π-alkyl interactions with the GLN529, HIS201, THR523, ARG155 and ARG175 amino acids. Compounds 4f and 4g showed binding energies of -5.99 and -5.23 kcal/mol, respectively. In this series, compound 4g shows the lowest binding energy. The pyridine-substituted compound 4h gives a binding energy of -5.94 kcal/mol and makes one hydrogen bond with the GLN526 amino acid residue. Further, it makes π-alkyl interactions with the ARG155, ARG205, ARG175, SER185 and GLN529 residues.
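To summarize the Hck docking comparison, a small R sketch ranking the binding energies reported above (the values are as given in the text; the ranking logic is ours, and 4d's energy is not stated numerically, hence the NA):

```r
# Reported AutoDock binding energies against tyrosine kinase Hck (2HCK),
# ranked from most to least favourable (lower dG = stronger binding).
hck <- data.frame(
  compound = c("4a", "4b", "4c", "4d", "4e", "4f", "4g", "4h"),
  dG_kcal  = c(-5.41, -5.84, -6.63, NA, -7.4, -5.99, -5.23, -5.94)
)
hck[order(hck$dG_kcal), ]  # 4e ranks first, consistent with the text
```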
Biomarker correlations of urinary 2,4-D levels in foresters: genomic instability and endocrine disruption.

Forest pesticide applicators constitute a unique pesticide use group. Aerial, mechanical-ground, and focal weed control by application of herbicides, in particular chlorophenoxy herbicides, yield diverse exposure scenarios. In the present work, we analyzed aberrations in G-banded chromosomes, reproductive hormone levels, and polymerase chain reaction-based V(D)J rearrangement frequencies in applicators whose exposures were mostly limited to chlorophenoxy herbicides. Data from appliers where chlorophenoxy use was less frequent were also examined. The biomarker outcome data were compared to urinary levels of 2,4-dichlorophenoxyacetic acid (2,4-D) obtained at the time of maximum 2,4-D use. Further comparisons of outcome data were made to the total volume of herbicides applied during the entire pesticide-use season. Twenty-four applicators and 15 minimally exposed (control) foresters were studied. Categorized by application method, men who used a hand-held, backpack sprayer in their applications showed the highest average level (453.6 ppb) of 2,4-D in urine. Serum luteinizing hormone (LH) values were correlated with urinary 2,4-D levels, but follicle-stimulating hormone and free and total testosterone were not. At the height of the application season, 6/7 backpack sprayers, 3/4 applicators who used multinozzle mechanical (boom) sprayers, 4/8 aerial applicators, and 2/5 skidder-radiarc (closed cab) appliers had two or more V(D)J region rearrangements per microgram of DNA. Only 5 of 15 minimally exposed (control) foresters had two or more rearrangements, and 3 of these 5 subjects demonstrated detectable levels of 2,4-D in the urine. Only 8/24 DNA samples obtained from the exposed group 10 months or more after their last chlorophenoxy use had two rearrangements per microgram of DNA, suggesting that the exposure-related effects observed were reversible and temporary. Although urinary 2,4-D levels were not correlated with chromosome aberration frequency, chromosome aberration frequencies were correlated with the total volume of herbicides applied, including products other than 2,4-D. In summary, herbicide applicators with high urinary levels of 2,4-D (backpack and boom spray applications) exhibited elevated LH levels. They also exhibited altered genomic stability as measured by V(D)J rearrangement frequency, which appears reversible months after peak exposure. Though the study is highly detailed, the limited sample size warrants cautious interpretation of the data.

Chlorophenoxy herbicides remain one of the most commonly used pesticide products due to their efficacy in weed control, relatively low cost, and low acute toxicity in humans (1). Historically, epidemiologic studies conducted in the midwestern United States have suggested an association between chlorophenoxy use and non-Hodgkin lymphoma (2,3). Chronic, long-term animal studies do not support carcinogenic effects for this herbicide in its pure form (4). Early commercial products containing 2,4,5-T (trichlorophenoxyacetic acid) alone or in combination with 2,4-D (dichlorophenoxyacetic acid) were found to have dioxin and dioxin-like contaminants. These findings gave mechanistic support to the proposed connection between chlorophenoxy herbicides and lymphoma due to the immunotoxic effects of dioxins (5,6).
A limited analytical chemical survey of chlorophenoxy herbicide products in current use did not suggest that the level of dioxin contamination in these commercial products poses a major health threat (7). In continuing work from our laboratory, we found that only one out of seven commercial-grade chlorophenoxy herbicide products induced dose-related increases in micronuclei frequency in cultured human lymphocytes. These data suggest that the majority of the commercial chlorophenoxy products studied were not genotoxic at the chromosomal level (8). In other in vitro studies, we explored the possibility that commercial-grade chlorophenoxy herbicides might show endocrine-disrupting activity. Two commercial-grade products tested showed evidence of weak endocrine-disrupting effects in MCF-7 cells (9), a breast cancer cell line that is responsive to estrogen-mediated cell proliferation. With further review, we noted that adjuvants are sometimes used in conjunction with chlorophenoxy herbicides in roadside and other applications. We found that four out of four adjuvants induced significant increases in the frequency of micronuclei (8). Two of five adjuvants showed evidence of weak endocrine-disrupting activity in MCF-7 cells (9). As a corollary, earlier human studies by our group demonstrated modest alteration of male reproductive hormone levels in herbicide applicators, but not in other pesticide use groups (insecticides, fumigants) (8), during the application season. The male reproductive hormones measured included follicle-stimulating hormone (FSH), luteinizing hormone (LH), and testosterone. Together, these hormones regulate spermatogenesis and sperm maturation (10). Prior molecular and chromosome studies indicated that herbicide application may differ significantly from fumigant application in terms of genotoxicity (11). In those studies, G-banded chromosome analysis demonstrated that chromosome damage was least frequent in applicators who only applied herbicides, compared to applicators who applied herbicides and insecticides and those who, in addition, applied fumigants. The present study was designed to focus on exposures limited to herbicides — if possible, to chlorophenoxy herbicides only — and to determine whether exposure to this herbicide class could contribute to the endocrine disruption and genotoxicity observed in earlier studies. In this effort, we took advantage of earlier studies by others in which exposures of backpack sprayers, boom sprayers, and aerial applicators demonstrated marked differences in reported urinary levels due to differences in 2,4-D application methods (12)(13)(14). These diverse exposure scenarios offered an approach to generate an acute, toxicant dose-related biologic response with use of appropriate biomarkers of toxicant effect. Similarly, one might avoid the confounding effects of exposure to more than one herbicide or pesticide class by focusing on acute exposure effects. The work presented below was undertaken to examine these hypotheses.

Materials and Methods

… In the laboratory, the coded specimens of blood were processed for cytogenetic analysis. Serum for later hormone analysis was cryopreserved at -80°C in Teflon vials. Urine specimens were transferred to specialized, chemically clean, Teflon-lined cryotubes and cryopreserved at -80°C for later pesticide analysis.
The peak application time frame was determined through telephone review of the pesticide application schedule (application method, days applied, volume to be applied, number of applications, and duration of application). Urine and blood specimens were obtained from control subjects contemporaneously with those from exposed subjects throughout the application season. Each time a group of exposed subjects' specimens was processed, we included specimens from at least one control subject. A second blood specimen was obtained from exposed subjects only, within 6 weeks of the beginning of the following season's application work, and compared to the earlier data set. The project was approved by the Institutional Review Board of the University of Minnesota and followed the written informed-consent procedures outlined in the approval.

Exposure assessment. Each state-licensed participant (exposed and control) provided their application records for the season's work. These records included the product used, application rate, volume of pesticide/herbicide used, use and type of adjuvant used in conjunction with the herbicide, and date and method of application. Included in this data set was the number of years of pesticide application work (seniority). Based on a change in application practice and a change in health status, one (exposed) study subject was excluded from our analyses.

Analytic chemical procedures. Sample preparation. A 10-mL aliquot of urine was enriched with 13C6-ring 2,4-D as an isotope dilution internal standard. The urine was acidified, then extracted with dichloromethane:diethyl ether (4:1). The extract was dried over anhydrous sodium sulfate, then concentrated to 100 µL with nitrogen using a TurboVap concentrator (Zymark Corporation, Hopkinton, MA).

Instrumental analysis. The HPLC-tandem mass spectrometric (MS/MS) analysis was performed with an HP1090L HPLC (Hewlett-Packard Co., Palo Alto, CA) connected in tandem to a TSQ-7000 triple quadrupole mass spectrometer (Finnigan MAT Instruments, San Jose, CA) equipped with an atmospheric pressure ionization (API) interface. Separation was achieved on a 25 cm × 4.6 mm Partisil 5 ODS-3 column (Whatman, Clifton, NJ), which was preceded in-line by a 20-mm guard column with identical sorbent to prolong the column lifetime. The solvent system consisted of acetonitrile:water (60:40) with 0.2% glacial acetic acid at a flow of 1 mL/min. Negative atmospheric pressure chemical ionization (-APCI) MS/MS was achieved using nitrogen as the sheath gas and argon as the collision gas. No API auxiliary gas was used. The pressure of nitrogen entering the API unit was kept constant at 40 psi (276 kPa). The argon gas pressure was 2 mTorr. The API vaporizer and capillary temperatures were 450°C and 250°C, respectively. The discharge of the corona needle was 5 µA. The collision offset was set at 22 V for optimal fragmentation. The electron multiplier voltage ranged from 1,800 to 2,400 V. During an analysis, four product ions were monitored at a scan time of 0.25 sec/ion in the multiple-reaction monitoring experiment. One quantification ion and one confirmation ion were monitored for both the native 2,4-D and the 13C6-ring 2,4-D. The confirmation ions represent the natural abundance of 37Cl in the 2,4-D molecules.

Data processing/analysis. Data were automatically processed by software supplied with the mass spectrometer. Each ion of interest was automatically selected, retention times calculated, and the area integrated.
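The isotope-dilution quantification implied by this setup reduces to simple arithmetic. A generic sketch with placeholder values — not the paper's actual software, and assuming a response factor of 1, which a real calibration curve would replace:

```r
# Generic isotope-dilution calculation for urinary 2,4-D (placeholders):
# the native/labeled peak-area ratio times the spiked internal-standard
# amount gives ng of analyte; dividing by urine volume gives ng/mL (ppb).
area_native <- 15200   # peak area, native 2,4-D quantification ion
area_is     <- 30100   # peak area, 13C6-ring 2,4-D quantification ion
spike_ng    <- 500     # ng of internal standard added to the aliquot
urine_mL    <- 10      # aliquot volume

conc_ppb <- (area_native / area_is) * spike_ng / urine_mL
conc_ppb  # ng/mL, i.e. ppb (assuming a response factor of 1)
```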
All data were checked for interference, peak selection, and baseline determination and were corrected if found in error. Because of the specificity of the MS/MS technique, interferences were rare. However, any interferences were easily recognizable because of a dramatic change in the ratio of the quantification ions to the confirmation ions of either the native analyte or the 13C analogue. When necessary, the data were reanalyzed to reflect these corrections. The data were downloaded into an ASCII file and transferred from the UNIX-based operating system on the TSQ-7000 to a personal computer via an Ethernet connection. The data were imported into an R:BASE (Microrim, Redmond, WA) database specifically designed for this analysis.

Reproductive hormone analysis. We measured luteinizing hormone (LH), follicle-stimulating hormone (FSH), and testosterone concentrations (total and free) in serum or plasma obtained from blood specimens donated by study participants. LH and FSH were measured using two-site immunofluorometric assays from commercially available kits (DELFIA catalog numbers 1244-031 and 1244-017; Wallac Oy, Turku, Finland) modified as previously described (15). Total and free testosterone were measured using solid-phase radioimmunoassays (catalog numbers TKTTI and TKTF1; Diagnostic Products Corporation, Los Angeles, CA). All these hormone assays are validated for use with serum or heparinized plasma. Samples were stored at -80°C until assay. All samples were assayed in one batch. The intra-assay coefficients of variation for total and free testosterone were 6.76% and 5.19%, respectively, and 1.35% and 1.16% for LH and FSH. For detailed statistical comparisons and analysis of hormone levels in exposed subjects, only those subjects who provided specimens at the height of the application season and within 6 weeks before the application of herbicides in the following year were included (n = 21 of 24 exposed subjects).

Chromosome studies. Specimen collection and cell culture. The standardized specimen collection and lymphocyte culture methods we used are detailed in earlier publications (11).

Chromosome analysis. As a general rule, we examined 100 complete consecutive G-banded metaphase cells per subject. For 96% of subjects, exactly 100 metaphase cells were examined, with 83-96 cells examined in the remaining subjects. Less than 10% of the metaphases examined contained < 46 chromosomes. All metaphases with rearrangements were photographed and karyotyped for breakpoint verification. These analyses were performed at the 400-band stage or greater. In this study, metaphase chromosomes with demonstrable discontinuity between chromosome segments but without loss of chromosome material are referred to as breaks, regardless of chromosome alignment. Otherwise, the International System for Human Cytogenetic Nomenclature (16) was followed for banded chromosome studies. For purposes of graphic presentation, the 400-band stage nomenclature was used. Chromosome readers were blind to exposure status.

PCR-based V(D)J trans-rearrangement assay. Previously, members of our investigative group developed (17) a polymerase chain reaction (PCR)-based assay to define the frequency of occurrence of variable (V), diversity (D), joining (J) recombinase-mediated trans-rearrangements between a V segment from the T-cell receptor gamma (7p14-15) locus and a J segment from the T-cell receptor beta (7q35) locus. This rearrangement results in a chromosome 7 inversion.
This abnormality occurs at low frequency in all individuals. The PCR-based assay described below provides a measure of genomic instability (18). In brief, genomic DNA was isolated by a modified method of the Buffone procedure (19) and resuspended in deionized water at a concentration of 100 ng/µL. DNA concentration was measured spectrophotometrically and rechecked by agarose gel electrophoresis. We routinely extracted 1 µg DNA/1.5-2.0 × 10⁵ peripheral blood mononuclear cells. In the first step to assay for recombination, DNA (125, 250, 500, and 1,000 ng) was suspended in a 50-µL solution containing 200 µM deoxynucleotides, 50 mM KCl, 10 mM Tris, pH 8.3, 1.5 mM MgCl₂, 0.01% gelatin, the 'a' set of primers (Vγa and Jβ1a) at a concentration of 1.4 ng/µL, 10% DMSO, and 2.5 U of Taq polymerase. Negative and positive controls were run with each experiment. The reaction was carried out at 95°C × 4 min for denaturation, followed by 25 cycles of amplification consisting of 95°C × 15 sec, 57.5°C × 15 sec, and 72°C × 30 sec plus a 6-sec increase per cycle. After 25 cycles, 10 min at 72°C was allowed for chain elongation. Ten percent of the first-step reaction was nested using the same conditions with the 'b' set of primers (Vγb and Jβ1b) at a concentration of 6 ng/µL. PCR products were run in a 1.5% agarose electrophoresis gel, and a picture was taken of the ethidium bromide-stained gel before transfer to a nylon membrane. A 'c' set of primers was used to verify Vγb/Jβ hybrids. A product was called positive for Vγ/Jβ rearrangements if it hybridized to both probes (Vγc and Jβ1c). Values were expressed as the reciprocal of the dilution titer per microgram of DNA (a worked illustration follows this passage). As before, blood samples obtained at the height of chlorophenoxy use were compared to samples obtained 8 months or more later for the exposed groups. Control samples for comparison were obtained throughout the pesticide application season. Statistical methods. Analyses of hormone levels were based on the differences between the logarithm of the hormone level at the peak of the application season and the logarithm of the hormone level several months after the application season. We used analysis of variance methods to test for changes in hormone level according to 2,4-D application method. Pearson correlation coefficients were used to quantify the relationship between urinary 2,4-D level and changes in hormone level across all application methods. To determine if chromosome aberration frequencies varied according to level of herbicide exposure, exact permutation significance levels based on the Wilcoxon rank-sum test were computed. We used Poisson regression analyses to examine the relationship between aberration frequencies and urinary 2,4-D levels. The frequencies of V(D)J rearrangements were compared among exposure groups using the exact Wilcoxon test and also by comparing the proportion of men with two or more V(D)J rearrangements using an exact trend test. Pearson correlation coefficients were used to quantify the relationship between urinary 2,4-D level and V(D)J rearrangement frequencies. All reported p-values are two-sided. Results. Table 1 compares application method, urinary 2,4-D levels, and total volume of herbicides used for exposed and control subjects. Urinary 2,4-D concentration and exposure status. Urine specimens obtained within 24 hr of the peak application show an exposure gradient according to application method.
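Before turning to the rankings, a worked illustration of the dilution-titer readout described in the methods above may help; this encodes our reading of "the reciprocal of the dilution titer per microgram of DNA", and the example scoring is invented rather than taken from the study.

```python
# Illustrative limiting-dilution readout for the V(D)J assay.
# Each input DNA amount (ng) is scored positive or negative for a
# Vgamma/Jbeta hybrid after nested PCR and probe hybridization.

def rearrangements_per_ug(results: dict) -> float:
    """Reciprocal of the smallest DNA amount (in micrograms) still positive.

    E.g. positive at 250 ng but negative at 125 ng -> 1 / 0.25 ug = 4 per ug.
    Returns 0.0 if no dilution is positive.
    """
    positive_ng = [ng for ng, positive in results.items() if positive]
    if not positive_ng:
        return 0.0
    return 1.0 / (min(positive_ng) / 1000.0)

# Hypothetical subject: positive at 250, 500 and 1,000 ng, negative at 125 ng.
subject = {125: False, 250: True, 500: True, 1000: True}
print(rearrangements_per_ug(subject))  # 4.0 rearrangements per microgram DNA
```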
The relative rankings for urine 2,4-D levels by application method are backpack sprayer > boom sprayer > aerial application > skidder > control subjects. These data are consistent with the expected differences in acute exposure for manual ground application (backpack) versus mechanical (boom sprayer), closed cabin (skidder-radiarc), or aerial application (helicopter or fixed wing) (14). There is a 10-fold difference in the mean urinary concentration levels (380.1 ppb) for all backpack and boom spray applications versus the pooled values of all aerial and skidder closed-cab applications (33.2 ppb). Reproductive hormone analysis. The largest changes in hormone levels during the application season (Table 2) were increases in LH levels for backpack applicators (p = 0.053) and boom sprayer applicators (p = 0.089). The increase for both application methods combined was significant (p = 0.015). Using serum from 21 of 24 applicators, LH levels are directly correlated (r = 0.56; two-sided p = 0.006) with urinary 2,4-D levels at the time of maximum application. FSH and total and free testosterone were not correlated with the level of 2,4-D in urine at the time of maximum use of chlorophenoxy herbicides. However, after the application season, the levels of total testosterone were directly correlated (r = 0.37; two-sided p = 0.03) with the level of 2,4-D in the urine at the time of peak season use. Chromosome analysis. In these analyses of the chromosome data, we considered correlations among urinary 2,4-D levels and chromosome damage, applicator group and chromosome damage, and pesticide use volume and chromosome damage. Table 3 expresses the relationship between the total volume of herbicides applied during the application season and chromosome damage as measured in G-banded metaphases from human lymphocytes. Chromosomal translocations, inversions, deletions (TIDs), breaks, and gaps occur more frequently among applicators who apply more than 1,000 gallons of herbicide during the application season. As noted in Table 1, most of these men are aerial applicators who apply a broad spectrum of herbicides including 2,4-D. With regard to the possible relationship between urinary concentrations of 2,4-D and chromosome aberrations, regression analyses indicated nonsignificant, negative regression coefficients. Adjustment for tobacco use and cigarette smoking status had little impact on the analysis of the association between 2,4-D levels and chromosome aberration frequencies. Thus, acute, high-level exposure to 2,4-D as measured by urinary concentration, with or without adjuvant use, is not associated with detectable chromosome damage in G-banded lymphocytes. V(D)J rearrangement frequency. Analysis of the data from exposed study subjects by application method showed that the frequency of two or more rearrangements is directly related to mean level of 2,4-D in urine (backpack sprayers > boom sprayers > aerial > skidder > control; p = 0.018) during the application season (Table 4). We compared the mean frequency of rearrangements per microgram DNA for backpack and boom spray application methods (3.36 ± 0.79), closed-cab (aerial and skidder) methods (1.85 ± 0.45), and forestry controls (1.47 ± 0.31). The frequency of rearrangements in applicators performing hand-held applications was significantly greater than in control subjects (p = 0.023).
The rearrangement frequency for mechanized application was not significantly different from that for hand-held pesticide application (p = 0.14). Specimens collected and examined from the exposed groups 6 weeks before the beginning of applications in the following year show some differences. For backpack sprayers, none of the six had any detectable rearrangements. Consistent with our earlier work (17,18), these data show that rearrangement frequency varies with exposure status and is, in general, a transient event in healthy, exposed workers. Workers with more seniority and exposure to a higher volume of different herbicides retain V(D)J region rearrangements over time (aerial applicators and skidder appliers). Minimally exposed foresters (controls) have a somewhat higher V(D)J rearrangement frequency (i.e., 1.47) than unexposed control subject values reported in our previous studies (i.e., < 1). Discussion Previous studies by Knopp (20) have demonstrated that urinary 2,4-D levels can exceed 1,000 ppb in workers employed in chlorophenoxy herbicide manufacture. By contrast, earlier reports dealing with forest pesticide applications suggest that urinary concentrations of 2,4-D arising from exposure occur within a range of 45-326 ppb (21). In the present work we examined first-voided urine specimens from workers at the time of maximum use of 2,4-D (22). This strategy, commonly used in the occupational setting (23), takes advantage of the reported half-life of 2,4-D in humans (12-33 hr) (24), repeated subject exposure, and urinary excretion rate to optimize exposure assessment. In Minnesota, the maximum number of herbicide applications occurs within a 6-week period in late spring (May, June) and early summer.
[Table 1 notes: Backpack and boom sprayers who apply pesticides manually have higher urinary levels of 2,4-D than do aerial and skidder applicators (p < 0.001). A value of 0.3 was used in calculating means for individuals with 2,4-D levels below the limit of detection. (a) More than 100 pounds of granular herbicide was also applied. (b) More than 500 pounds of granular herbicide was also applied.]
The volume of 2,4-D use varies with application method (Table 1). The measured urinary 2,4-D values strongly suggest that ground application by backpack or boom sprayer yields significant, acute exposure for the workers. Interestingly, six of seven backpack sprayers stated they used rubber gloves and wore rubber boots as protective gear. Five of seven backpack sprayers wore a protective suit. These data and other data from other application groups (not shown) suggest that the method of herbicide application is the most significant factor in personal exposure. The lack of correlation between urinary 2,4-D levels and chromosomal aberrations is not unexpected, as the overwhelming majority of animal and in vitro studies do not show evidence of genotoxicity for 2,4-D (4). In an earlier in vitro study by our group, some adjuvants used in conjunction with 2,4-D applications were found to be genotoxic (8). As a point of reference, adjuvants are chemical mixtures that commonly contain surfactants and oils used to increase the potency of herbicides. In the present study, it was possible to divide the ground application group who applied less than 100 gallons of herbicide (n = 7) into appliers who used only 2,4-D products and adjuvants (n = 3) and those who did not use adjuvants with 2,4-D (n = 4). No significant difference in the frequency of chromosome damage between these two small groups was noted.
In the remaining exposed subjects grouped by volume of total herbicide applied (i.e., 100-1,000 and >1,000 gallons), it was not possible to separate adjuvant and nonadjuvant use for applications of 2,4-D and other herbicides. All but two of the appliers in these exposure groups used adjuvants. However, the increased frequency of chromosome aberrations noted is related to the total volume of different herbicides used, including 2,4-D. To what extent the cumulative chromosome effect observed can be related to adjuvant use is uncertain. Moreover, the diversity of adjuvant product formulations used in conjunction with 2,4-D, viewed in contrast to the direct correlation between 2,4-D levels in the urine with LH and V(D)J levels in blood, weighs against direct participation of specific adjuvants in these acute effects. Indirect participation of adjuvants through increased skin penetration of the herbicide remains a concern. The increase in V(D)J rearrangements observed herein connotes a transient increase in genomic instability with forester exposure to chlorophenoxy herbicides and/or adjuvants. These findings are similar to those we reported in agricultural pesticide applicators and in patients undergoing chemotherapy in response to short-term, relatively high-level exposure to known or suspected genotoxic agents (17,18). The parallel between these earlier findings and the current study is most clearly demonstrated in backpack sprayers who underwent short-term, high-level exposure to chlorophenoxy herbicides. In some aerial applicators and skidder applicators, V(D)J region rearrangements were retained over time. The mean age (40.5 years) of the aerial and skidder applicator groups is not significantly different from the mean age (37.4 years) of backpack sprayers, but the mean seniority (years licensed) of backpack sprayers is significantly less (7.7 years) than that of aerial applicators (18.5 years). Seniority, differences in the volume of pesticide used by the different exposure groups, use of herbicides other than 2,4-D, or exposures unique to aviation may explain the persistence of rearrangements noted in aerial applicators. Considered at a mechanistic level, it is possible that repeated exposures to pesticides over time led to the development of long-lived "memory" T cells. These cells could account for persistence of the V(D)J region rearrangements (25). Foresters chosen as control subjects for this study held supervisory positions and were not actively engaged in herbicide applications. V(D)J rearrangements in excess of 1 per microgram of DNA in the minimally exposed control group may reflect exposure events unaccounted for in available records. This suggestion is supported in part by detectable 2,4-D in the urine of some of these foresters. With regard to hormone analysis, prior studies by our group (8) and by Straube et al. in 1999 (26) show some notable similarities and differences in reported changes in reproductive hormone levels during and after acute exposure to pesticides. Both those studies report significant increases in testosterone levels after the pesticide application season was completed. Increases in the reported testosterone levels in the current study after the application season were consistent with these earlier findings. In our earlier study, FSH levels were decreased at the height of the application season and LH values were increased in serum from herbicide applicators. In the present study and in the Straube et al.
study (26), significant increases in LH were obtained at the height of the application season. Neither of the two earlier studies reported urinary levels of pesticides or herbicides. Direct correlation of urinary levels of 2,4-D with serum levels of LH at the time of highest exposure suggests a direct effect on hormonal levels by chlorophenoxy herbicides. Chronically increased secretion of LH by the pituitary in response to exposure to these products, leading to significant increases in testosterone levels, is consistent with our present understanding of testosterone cycling in response to LH stimulation of testosterone synthesis by the Leydig cells of the testes (10). Curiously, sustained high LH levels have been seen in association with exposure to dioxins (27).
[Table 4 notes: NS, no specimen. The proportion of subjects with two or more V(D)J region trans-rearrangements per microgram DNA was categorized by application method. These data were compared to urinary 2,4-D concentration at the time of maximum use (spring). During the peak application season, rearrangement frequencies rank as follows: forester controls < skidder < aerial < boom spray < backpack (p = 0.018 using exact trend test). V(D)J region rearrangement frequencies are positively correlated (r = 0.54) with urinary 2,4-D levels (p = 0.003). (a) No winter specimen was obtained from one of seven backpack sprayers.]
The median and normal range for the clinical LH assay used in this study are 3.3 and 1.0-8.4 mIU/mL (n = 89 men). These data and our reported findings suggest that although the reproductive hormone data may be significant for the population, they are not of immediate clinical concern for the individual. It is not clear what impact these minor but statistically significant and repeatedly observed (8,26) reproductive hormone disruptions might have on male reproductive potential. From a different perspective, and potentially of greater concern, may be the effects of a minor increase in LH secretion on the menstrual cycle and ovulation. Whether small fluctuations of the level of LH can affect women's fertility is uncertain. Increased V(D)J rearrangement frequencies and LH levels positively correlated with the level of 2,4-D in the urine but did not correlate with total herbicide use or seniority. Together, these data further suggest that increased LH and V(D)J were due to acute, short-term exposure to 2,4-D used in conjunction with adjuvants. Finally, the apparent coincident effect of 2,4-D exposure on LH levels and the increased V(D)J region rearrangements once again leads to intriguing speculation regarding the relationship of reproductive hormonal status and the function of the immune system (28).
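As a closing illustration of the statistical approach described in the methods — paired log-differences in hormone level correlated against urinary 2,4-D — the sketch below recomputes a Pearson coefficient on invented data; the exact permutation tests and Poisson regressions actually used in the study are not reproduced here.

```python
# Minimal sketch of the correlation analyses described in Statistical methods:
# hormone change = log(peak-season level) - log(post-season level), then
# Pearson r against urinary 2,4-D. All data below are invented, not study data.
import math

def pearson_r(x: list, y: list) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

urinary_24d = [453.0, 120.5, 30.2, 12.9, 5.6, 0.3]   # ppb, invented
lh_peak     = [9.1, 6.8, 4.4, 3.9, 3.5, 3.2]         # mIU/mL, invented
lh_post     = [4.9, 4.6, 4.0, 3.8, 3.6, 3.3]         # mIU/mL, invented

log_change = [math.log(p) - math.log(q) for p, q in zip(lh_peak, lh_post)]
print(round(pearson_r(urinary_24d, log_change), 2))  # ~0.93 on these invented data
```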
2014-10-01T00:00:00.000Z
2001-05-01T00:00:00.000
{ "year": 2001, "sha1": "e1dbc9bfe872aacc95b894f31a892717bd628a1d", "oa_license": "pd", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1240309/pdf/ehp0109-000495.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e1dbc9bfe872aacc95b894f31a892717bd628a1d", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
19104124
pes2o/s2orc
v3-fos-license
Bone mineral density changes of lumbar spine and femur in osteoporotic patient treated with bisphosphonates and beta-hydroxy-beta-methylbutyrate (HMB) Abstract Rationale: Currently available approaches to osteoporosis treatment include application of antiresorptive and anabolic agents influencing bone tissue metabolism. The aim of the study was to present bone mineral density (BMD) changes of lumbar spine in an osteoporotic patient treated with bisphosphonates such as ibandronic acid and pamidronic acid, and beta-hydroxy-beta-methylbutyrate (HMB). Patient concerns: BMD and volumetric BMD (vBMD) of lumbar spine were measured during the 6-year observation period with the use of dual-energy X-ray absorptiometry (DEXA) and quantitative computed tomography (QCT). Diagnoses: The described case report of an osteoporotic patient with a family history of severe osteoporosis has shown a site-dependent response of bone tissue to antiosteoporotic treatment with bisphosphonates. Interventions and outcomes: Twenty-five-month treatment with ibandronic acid improved proximal femur BMD with relatively poor effects on lumbar spine BMD. Therapy with pamidronic acid lasting over 15 months was effective to improve lumbar spine BMD, while in the proximal femur the treatment was not effective. A total of 61 weeks of oral administration with the calcium salt of HMB improved vBMD of lumbar spine in the trabecular and cortical bone compartments when monitored by QCT. Positive effects of nearly 2.5-year HMB treatment on BMD of lumbar spine and femur in the patient were also confirmed using the DEXA method. Lessons: The results obtained indicate that HMB may be applied for the effective treatment of osteoporosis in humans. Further studies on a wider human population are recommended to evaluate the mechanisms by which HMB influences bone tissue metabolism. Introduction Osteoporosis is the most common metabolic bone disease in humans. The first definition of osteoporosis, formulated by Albright in 1941, stated that osteoporosis is characterized by too little bone tissue in bone. [1] In 1994, the World Health Organization (WHO) enhanced this definition, stating that osteoporosis is a systemic skeletal disease characterized by low bone mass and microarchitectural deterioration of bone tissue, with consequent increase in bone fragility and susceptibility to fracture. [2] The definition provided by WHO associated decreased bone mass (determined by measurement of bone mineral density [BMD]) with microarchitectural deterioration of bone tissue and susceptibility to bone fractures. The National Institutes of Health and the International Osteoporosis Foundation updated previous definitions in 2000, stating that osteoporosis is a skeletal system disease characterized by decreased mechanical endurance of bones that increases fracture risk, connecting various risk factors with decreased mechanical endurance of bones and osteoporotic fracture incidence. [3,4] Osteoporosis is diagnosed clinically when there is a presence of fragility fracture or BMD measured by bone densitometry that is less than or equal to 2.5 standard deviations below that of a young adult ethnic- and sex-matched reference population. The standard deviation value is described as the T-score. A T-score value between -1.0 and -2.5 indicates osteopenia, while a T-score between +2.5 and -1.0 is considered to reflect normal bone mass status. [5] The Z-score determined using bone densitometry shows the number of standard deviations by which the measured BMD differs from the physiological range.
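The T-score and Z-score definitions above reduce to simple arithmetic, sketched below; the BMD value and the reference means and standard deviations are invented for illustration (real normative databases are built into the densitometer software).

```python
# T-score and Z-score from a BMD measurement (reference values are invented).
def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """SD distance from the young-adult, sex- and ethnicity-matched mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd: float, age_matched_mean: float, age_matched_sd: float) -> float:
    """SD distance from the age-, sex-, ethnicity- and weight-matched mean."""
    return (bmd - age_matched_mean) / age_matched_sd

def classify(t: float) -> str:
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

bmd_l2 = 0.85  # g/cm^2, invented
print(classify(t_score(bmd_l2, young_adult_mean=1.18, young_adult_sd=0.12)))
# -> "osteoporosis" (T-score = -2.75)
```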
The normative database is matched for age, gender, ethnicity, and body weight. [6,7] The determined Z-score reflects the difference from a demographically similar healthy population within the physiological norm. The Z-score value is usually less negative than the T-score, especially with advancing age. A low Z-score value associated with low BMD indicates additional factors other than natural menopause and aging which have adversely affected skeletal system health. [6][7][8] Currently available approaches to osteoporosis treatment include application of antiresorptive and anabolic agents influencing bone tissue metabolism. Antiresorptive drugs may be effective in restoring skeletal balance by reducing bone turnover at the tissue level and result in diminished osteoporotic fracture incidence. [9,10] Bisphosphonates are widely used antiresorptive drugs for osteoporosis treatment. Bisphosphonates restrain bone resorption via inhibition of osteoclast recruitment and differentiation and enhanced osteoclast apoptosis, which finally leads to reduction of fracture risk. [11] Beta-hydroxy-beta-methylbutyrate (HMB) administration was shown to induce anabolic effects on bone tissue metabolism in experimental animals, improving BMD, geometrical properties, and mechanical strength of bones in the axial and peripheral skeleton. [12][13][14][15][16][17] However, studies on effects of HMB on skeletal system quality in humans are strongly limited. Thus, the aim of the study was to present BMD changes of lumbar spine and femur in an osteoporotic patient treated with bisphosphonates and HMB. Materials and methods All procedures performed in this study were in accordance with the institutional ethical standards obligatory for the Medical University in Lublin, Poland. Description of patient history, densitometric measurements, and antiosteoporotic treatment In December 2009 (baseline), a 63-year-old woman with a family history of severe osteoporosis was subjected to diagnostic densitometry of lumbar spine and proximal femur with the use of the dual-energy X-ray absorptiometry (DEXA) method and a Lunar Prodigy Advance apparatus (GE Healthcare Lunar, Europe). The patient had been subjected to hormone replacement therapy for over the previous 5 years. Before examination, the patient reported a family history of severe osteoporosis in her father, a sister 14 years older, and a brother 9 years older. The lowest values of BMD T-score were measured in the 2nd lumbar vertebra (L2) and Ward triangle (-2.8 and -2.3, respectively; Table 3). As the result of the baseline densitometric examination, the patient was recommended to start antiresorptive treatment with bisphosphonates. Acidum ibandronicum (Bonviva, Roche Pharma AG, Germany) was taken orally once monthly in the dosage of 150 mg for over 2 years (25 months). After the 3rd DEXA examination in December 2011, the patient started therapy with acidum pamidronicum (Pamifos, 90 mg per month intravenously, Vipharm SA, Poland). The patient was not diagnosed with any neoplastic disease concerning the skeletal system or other tissues. The therapy with Pamifos lasted for 20 months until September 2013, when it was changed for the calcium salt of beta-hydroxy-beta-methylbutyrate (CaHMB). To monitor the metabolic response of the skeleton to the antiresorptive treatment, the patient was also subjected to DEXA examination in April 2013. The therapy with CaHMB (HMB Mega Caps 1250, Olimp Sport Nutrition, Poland) was performed at the dosage of 1250 mg per day orally. One CaHMB capsule consists of 1000 mg of pure HMB.
The HMB capsule was taken with dinner each day and the treatment was continued until July 2016. During the HMB treatment course, 3 subsequent DEXA examinations of the patient were performed (February 2014, March 2015, and July 2016). All the densitometric measurements with the DEXA method were performed in the same diagnostic laboratory using the same apparatus (Lunar Prodigy Advance, GE Healthcare Lunar, Europe). To monitor the metabolic response of the axial skeleton to the treatment with HMB, the patient was subjected to densitometric examination of the lumbar spine in March 2014 with the use of the quantitative computed tomography (QCT) method. A SOMATOM EMOTION SIEMENS apparatus (Siemens, Erlangen, Germany) equipped with Somaris/5 VB10B software (version B10/2004A) and the Osteo CT application package was used to determine the volumetric bone mineral density (vBMD) of the trabecular and cortical bone compartments in each lumbar vertebra (L1-L5). Calcium hydroxyapatite (Ca-HA) density of trabecular bone was measured on a cross-section of the vertebral body in the central part, while Ca-HA density of cortical bone was determined on the margins of the vertebral body, analogically for each vertebra. The results of the densitometric measurements were expressed in mg Ca-HA/mL. The lumbar spine was scanned together with the water- and bone-equivalent calibration phantom, and the measuring scans were 10 mm thick and placed at 50% of the vertebral body length (Fig. 1). Moreover, T-score (20 years) and Z-score values were automatically determined. The follow-up QCT examination of the patient was performed after 14 months (61 weeks), in May 2015. vBMD measurements of the lumbar spine were performed by the same radiologist and using the same equipment and software. Other medications and antiosteoporotic drugs were not used by this patient during the observation period between September 2014 and May 2015, and later. Independently from the QCT measurements, a densitometric measurement using the DEXA method was performed in July 2016. Body weight and body mass index changes of the patient during the observation period are shown in Table 1. Ibandronic acid treatment Results of densitometric measurements of lumbar spine and proximal femur in the patient at the baseline and after 1- and 2-year oral therapy with ibandronic acid are shown in Table 2. T-score and Z-score values corresponding to the BMD measurements are presented in Tables 3 and 4. Ibandronic acid treatment did not improve BMD values in L1-L4, which were slightly decreased at both time-point measurements when compared to the baseline value. Similar results were observed for all single vertebrae, except for BMD measured after 2 years from baseline for L3 and L4, where increases of 0.001 and 0.007 g/cm² were observed. Except for the decline of BMD by 10% in the upper femoral neck after 2 years from the baseline, all the other measurements in the proximal femur at 1 and 2 years from the baseline were increased as the consequence of ibandronic acid treatment. Total hip BMD increased by 0.005 and 0.020 g/cm² after 1- and 2-year therapy with ibandronic acid, respectively. Beta-hydroxy-beta-methylbutyrate treatment Results of BMD measurements in the patient receiving HMB therapy are shown in Tables 2-4; the corresponding T-score and Z-score values are given in Tables 3 and 4. Results of vBMD measurements with the use of the QCT method for trabecular and cortical bone compartments in the lumbar spine of the patient are shown in Table 5.
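The per-vertebra percentage changes summarized in the next paragraph follow directly from such paired QCT measurements; a minimal sketch with invented Ca-HA densities:

```python
# Percent change in vBMD between two QCT examinations (all values invented).
baseline = {"L1": 95.0, "L2": 88.0, "L3": 92.0, "L4": 90.0, "L5": 97.0}   # mg Ca-HA/mL
follow_up = {"L1": 97.5, "L2": 88.1, "L3": 95.0, "L4": 92.5, "L5": 99.0}  # mg Ca-HA/mL

changes = {v: 100.0 * (follow_up[v] - baseline[v]) / baseline[v] for v in baseline}
average = sum(changes.values()) / len(changes)

for vertebra, pct in changes.items():
    print(f"{vertebra}: {pct:+.2f}%")
print(f"L1-L5 average: {average:+.2f}%")
```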
[Table 3 caption: T-score values in lumbar vertebrae and proximal femur measured with the use of dual-energy X-ray absorptiometry (DEXA) method in the patient at the baseline and subsequent visits.]
[Table 5 caption: vBMD of trabecular and cortical bone compartments in lumbar spine (mg Ca-HA/mL) measured with the use of QCT in the patient in March 2014 and after the following 14-mo therapy with beta-hydroxy-beta-methylbutyrate.]
As the consequence of the 14-month therapy with HMB, Tb Ca-HA of the lumbar vertebrae was increased within the range of 0.10% to 4.50%, reaching an average increase of 1.98% for L1-L5. vBMD values of the cortical bone compartment increased for all lumbar vertebrae within the range of 1.15% to 15.65%, reaching an average increase of 7.99% for L1-L5. Increases of the T-score (20 years) and Z-score values by 0.07 and 0.12, respectively, were also noted during the 14-month period of HMB therapy. Discussion Bisphosphonates are pyrophosphate analogs with high affinity for bone hydroxyapatite. Bisphosphonates bind directly to mineralized bone, inducing blockage of the bone surface and preventing osteoclast-dependent bone resorption. [18][19][20] Bisphosphonates may also inhibit osteoclastic activity and reduce the lifespan of the osteoclasts. [21] Ibandronic acid and pamidronic acid belong to the 2nd-generation nitrogenous bisphosphonates group (aminobisphosphonates containing nitrogen in an alkyl chain), which is considered more effective for the treatment of osteoporosis than the 1st-generation nonnitrogenous bisphosphonates. [20] The densitometric measurements performed in this study enabled monitoring of the effectiveness of ibandronic acid and pamidronic acid administration in an osteoporotic patient with a family history of severe osteoporosis. Ibandronic acid was administered for 25 months and its effect on bone mineral density differed depending on the examined skeletal site. In lumbar vertebrae (L1-L4), 7.6% and 1% decreases of BMD were observed after 1- and 2-year treatment since the baseline densitometry. However, 1-year treatment with ibandronic acid increased BMD in all the investigated areas of the proximal femur within the range of 0.6% to 4.5%. Similar increases of BMD were obtained in proximal femur regions after the 2-year observation, except for the upper neck region, where BMD was decreased by over 10% versus the baseline value. The results of measurements of proximal femur BMD in the current study correspond to the results of a meta-analysis of 34 studies on ibandronic acid treatment effectiveness in osteoporotic patients. It was shown that oral administration of ibandronate increased total hip BMD by 2.13%, with an average duration of the ibandronate treatment of 1.9 ± 1.06 years. However, the observed decrease of BMD values in the lumbar spine in the current study seems to be opposite to the effects of ibandronate treatment reported in osteoporotic patients in the meta-analysis, where a 4.57% increase of lumbar spine BMD was reported. [22] In the other 24-month study on women suffering from postmenopausal osteoporosis, BMD increased significantly relative to baseline in the group on continuous oral ibandronate therapy. The continuous ibandronate therapy increased lumbar spine and total hip BMD by 5.64% and 3.35%, respectively. [23] Pamidronic acid treatment is recommended for patients with cancers that cause osteolysis. It is recommended for the prevention of skeletal-related events in patients with advanced solid tumors such as breast and prostate cancers. [24,25]
In this study, pamidronic acid treatment was recommended for the patient without any neoplastic disease concerning the skeletal system and other tissues. Pamidronic acid administration in this study lasted for 20 months; however, DEXA examination was performed after nearly 16 months of the treatment. Similarly to ibandronic acid treatment, the antiosteoporotic effects of pamidronic acid treatment differed between the lumbar spine and proximal femur. In all the examined lumbar vertebrae, BMD determined by the DEXA method was improved within the range of 2.8% to 11.2% when compared to the previous measurement performed in December 2011. BMD measured for L1-L4 was increased by 5.8% as the consequence of the pamidronic acid treatment. However, except for the upper neck, where a 12.9% increase in BMD was observed, BMD values were decreased in the other examined regions of interest of the proximal femur within the range of 1.3% to 2.9%. As shown in the previous study by Vis et al (2005), [26] intravenous administration of pamidronic acid at the dosage of 60 mg every 3 months was effective to improve BMD in lumbar spine and proximal femur. BMD values of the lumbar spine and hip increased significantly, by 4.0% and 2.9%, after 1-year pamidronate treatment. The effectiveness of pamidronate treatment was comparable to oral alendronate administration in patients suffering from osteoporosis. Moreover, intravenous infusion with pamidronate was suggested to be a therapeutic alternative for patients with gastrointestinal intolerance of oral bisphosphonates. [26] In the other 3-year study on patients suffering from postmenopausal osteoporosis, lumbar spine BMD was shown to be improved as the consequence of once-monthly intravenous infusion of 60 mg of pamidronate. BMD was measured 3 times using DEXA in patients since baseline at 1-year intervals, and the therapeutic effectiveness of pamidronate and alendronate was comparable. [27] Oral treatment with pamidronate (150 mg/day) in postmenopausal women increased lumbar spine BMD by 9.4%. [28] In the study on postmenopausal women and men with at least 1 vertebral fracture, an increase of 14.3% of BMD of the spine was observed as the consequence of 5-year oral treatment with pamidronate. [29] As opposed to the current study, negative therapeutic effects of the treatment with either ibandronate or pamidronate on skeletal BMD were not reported in previous studies. HMB is a metabolite of the essential amino acid leucine, and it is produced from alpha-ketoisocaproate by the enzyme alpha-ketoisocaproate dioxygenase. Experimental studies have suggested HMB to be the bioactive metabolite of leucine responsible for inhibiting proteolysis and for modulating protein turnover in vitro and in vivo. [30,31] Dietary administration with HMB was shown to induce numerous beneficial effects including increased lean body mass and muscle strength, stimulation of lipolytic processes and reduction of fat mass, anticatabolic and anabolic activities including inhibition of protein degradation and stimulation of protein synthesis in skeletal muscles, as well as improvement of collagen synthesis and hydroxyproline formation. [32,33] Studies in humans showing results of dietary administration with HMB on skeletal system properties are strongly limited. There is only one 12-week nutritional study, performed in 8 men and 12 women (mean age 54 years) administered orally with the calcium salt of HMB at the daily dosage of 3 g.
It was shown that the treatment of rheumatoid cachexia with HMB (3 g of calcium salt), glutamine (14 g), and arginine (14 g) improved whole-body BMC of the patients by 5.9 g versus baseline measurements, while in the controls receiving an isocaloric and isonitrogenous placebo, whole-body BMC was decreased by 1.5 g. However, the 3-month trial was relatively short to obtain a significant metabolic response of bone tissue in the whole skeleton in patients at a mean age of 54 years. [34] In this study, for the first time, positive effects of 61-week oral administration with HMB on lumbar spine vBMD in an osteoporotic patient were documented. The dosage used in the current study was one third of the daily dose used in the trial described by Marcora et al (2005); however, its duration was more than 5 times longer. Moreover, the patient in this study received the calcium salt of HMB without an additional administration of glutamine and arginine. The positive effects of HMB administration on vBMD in lumbar vertebrae differed depending on the bone tissue compartment. A higher increase of vBMD (L1-L5), reaching nearly 8%, was observed in cortical bone in comparison to the trabecular bone compartment, where a nearly 2% increase was observed. Both these results prove that dietary administration with HMB in humans improves vBMD. It should also be highlighted that no side effects were reported by the patient during the whole HMB treatment course. Moreover, after approximately 12 months of the therapy with HMB, the patient reported resolution of the previously experienced low back pain. As opposed to the study performed by Marcora et al (2005), in which the DEXA method was used for BMD and BMC determination, in the current study vBMD was measured using both QCT and DEXA techniques. Positive effects of nearly 2.5-year HMB treatment on BMD of lumbar spine and femur in the patient were also confirmed. Except for L2 and total hip, BMD, T-score, and Z-score values increased in all the evaluated lumbar vertebrae and regions of the proximal femur. The DEXA method provides combined results of BMD measurement in trabecular and cortical bone (expressed in g/cm²), while QCT allows separate volumetric analysis of trabecular and cortical bone density (expressed in g/cm³), independent of one another. The advantage resulting from this methodological approach is that the measurements of vBMD were performed independently for both the trabecular and cortical bone compartments of the axial skeleton. In contrast to the DEXA method, where bone size may affect the BMD value, vBMD measurement with the use of the QCT method provides results independent of bone size. Moreover, vBMD can be easily measured with the use of QCT without potential overestimation errors resulting from surrounding soft tissue volume, as well as possible osteoarthritic and osteophytic changes, which cannot be eliminated when performing DEXA analysis. It is worth underlining that the number of degenerative and osteophytic changes increases in patients with advanced age. [35][36][37] The positive effect of HMB administration on vBMD in the described osteoporotic patient is in accordance with the previous studies on animals at the stages of systemic growth and osteopenia induction. Both prenatal and neonatal administrations with HMB in pigs and sheep have increased vBMD of the trabecular and cortical bone compartments and mechanical endurance of bones in the peripheral (femur) and axial (lumbar spine) skeleton.
These effects were associated with increased concentrations of growth hormone, insulin-like growth factor I, and serum bone formation markers. [13,14] In the other study on pigs, HMB administration (0.05 g/kg of body weight/day, per os) throughout 7 months was effective to reduce the development of severe osteopenia induced by fundectomy performed on the 40th day of life. In the fundectomized pigs, HMB significantly increased mean vBMD (MvBMD), vBMD of trabecular bone, Ca-HA density of trabecular bone, Ca-HA density of cortical bone, BMD, BMC, ultimate force, ultimate stress, Young's modulus, stiffness, and work to the ultimate force point in lumbar vertebrae. The antiosteopenic effect of HMB administration was associated with improved amino acid metabolism and higher plasma concentrations of valine, leucine, threonine, methionine, tyrosine, tryptophan, and arginine. [15] Similar antiosteopenic effects were observed in studies on ovariectomized rats with established osteopenia. Daily administration with a water solution of CaHMB (1.9 g/L of drinking water administered ad libitum) throughout 2 months was effective to reverse osteopenia of the femur and lumbar vertebrae (L2-L4). BMD and mechanical properties of femur and lumbar spine were improved in ovariectomized and HMB-treated rats in comparison to the ovariectomized controls, and the values obtained were comparable to those in the sham-operated group. [12] Conclusions In conclusion, the described case report of an osteoporotic patient with a family history of severe osteoporosis has shown a site-dependent response of bone tissue to antiosteoporotic treatment with bisphosphonates. It may be summarized that ibandronic acid treatment improved proximal femur BMD with relatively poor effects on lumbar spine BMD. Pamidronic acid therapy was effective to improve lumbar spine BMD, while in the proximal femur the treatment was not effective to improve BMD. A total of 61 weeks of oral administration with the calcium salt of HMB in the patient improved vBMD of the lumbar spine in the trabecular and cortical bone compartments, indicating that HMB may be applied for the effective treatment of osteoporosis in humans. Positive effects of nearly 2.5-year HMB treatment on BMD of lumbar spine and femur in the patient were also confirmed using the DEXA method. However, further studies on a wider human population are recommended to evaluate the mechanisms by which HMB influences bone tissue metabolism. It should also be clarified whether relationships exist between HMB dosage and the response of the skeletal system to the treatment.
2018-04-03T02:05:33.760Z
2017-10-01T00:00:00.000
{ "year": 2017, "sha1": "3e01f9ac4fbf41fec5d450a074e5a81148af96e6", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/md.0000000000008178", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3e01f9ac4fbf41fec5d450a074e5a81148af96e6", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
51947593
pes2o/s2orc
v3-fos-license
Tryptase as a marker of severity of aortic valve stenosis Background Severe aortic valve stenosis is one of the most common causes of mortality in adult patients affected with metabolic syndrome, a condition associated with an active inflammatory process involving also mast cells and their mediators, in particular tryptase. The aim of this study was to characterize the possible long-term prognostic role of tryptase in severe aortic valve stenosis. Case presentation The baseline serum tryptase was measured in 5 consecutive patients admitted to our Hospital to undergo aortic valve replacement for severe acquired stenosis. Within 2 years after, the patients were evaluated for the occurrence of major cardiovascular events (MACE). The tryptase measurements were higher in patients experiencing MACE (10.9, 11.7 and 9.32 ng/ml) than in non-MACE ones (5.69 and 5.58 ng/ml). Conclusions In patients affected with severe aortic stenosis, baseline serum tryptase may predict occurrence of MACE. Further studies are needed to demonstrate the long-term prognostic role of this biomarker. Background Severe aortic valve stenosis is one of the most common causes of mortality in adult patients affected with metabolic syndrome [1], i.e., a clinical condition characterized by visceral obesity that translates into insulin resistance, atherogenic dyslipidemia and a proinflammatory state [2]. It is frequently due to an active process involving several pathways, including lipid infiltration, chronic inflammation, fibrosis formation, osteoblast activation, and valve mineralization. Other causes are congenital valve defects, systemic inflammatory diseases, and endocarditis [3]. Prevalence is between 2 and 9% in subjects over 65 years, and it is expected to increase significantly in forthcoming decades as a consequence of the ageing population and of more accurate diagnostic methods [4]. Severe aortic stenosis is defined by the presence of maximum aortic velocity ≥ 4 m/s, or aortic valve area ≤ 1.0 cm², or by the presence of severe leaflet calcification and severely reduced leaflet opening (these cut-offs are encoded in the illustrative sketch below). Surgical aortic valve replacement is indicated in symptomatic patients with severe high-gradient aortic stenosis, and in asymptomatic ones with severe aortic stenosis and left ventricular ejection fraction < 50% [5]. Its natural history results in the obstruction of the left ventricular outflow, followed by pressure overload and compensatory hypertrophy of the left ventricle. Excessive hypertrophy may decrease coronary blood flow reserve and increase collagen synthesis, interstitial fibrosis, and myocyte degeneration, resulting in ischemic cardiac disease, sudden death and heart failure syndrome. Moreover, these patients have a major risk of bleeding due to angiodysplasia, altered platelet function and low concentration of von Willebrand factor [3]. High-sensitivity cardiac troponin T (hsTnT) is useful for risk stratification of severity and mortality [6].
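The echocardiographic thresholds quoted above amount to a simple decision rule, sketched below; real grading also weighs the mean gradient, flow state and imaging findings not modeled here.

```python
# Flag severe aortic stenosis from the cut-offs quoted in the text.
# Only the two numeric thresholds and the calcification criterion are
# encoded; this is an illustration, not a clinical grading tool.

def is_severe_as(peak_velocity_m_s=None, valve_area_cm2=None,
                 severe_calcification_with_reduced_opening=False):
    if peak_velocity_m_s is not None and peak_velocity_m_s >= 4.0:
        return True
    if valve_area_cm2 is not None and valve_area_cm2 <= 1.0:
        return True
    return severe_calcification_with_reduced_opening

print(is_severe_as(peak_velocity_m_s=4.3))   # True
print(is_severe_as(valve_area_cm2=1.4))      # False
```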
However, recently some authors described the role of mast cells in calcified aortic stenosis [7], and an autoptic study detected these cells in the excised valves of patients undergoing elective aortic valve replacement, in comparison with normal aortic valves from five healthy subjects obtained at autopsy, which served as negative controls [8]. In light of the above, we studied basal serum tryptase as a new serological prognostic biomarker in aortic valve stenosis. Tryptase is a mast cell serine protease that provides information about mast cell number, distribution, and activation depending on the clinical context [9]. In some cardiovascular diseases, this enzyme has important implications and represents an index of mast cells' burden [10,11]. In particular, in subjects affected with acute coronary syndrome we found higher basal tryptase values in so-defined 'cardiovascular complex' patients than in 'non-complex' ones [12]. Moreover, in the same population the basal serum tryptase was significantly correlated with the development of major cardiovascular events (MACE) up to 2 years, demonstrating a possible long-term prognostic role of this biomarker [13]. Case reports Herein, we describe a total of 5 consecutive patients admitted to our Hospital from January 2015 to December 2016 to undergo aortic valve replacement for severe acquired stenosis. None was affected with autoimmune diseases, severe allergies, cancer, renal failure, mastocytosis, refractory anemia, myelodysplastic syndromes, or hypereosinophilic syndrome. After admission, we collected from all the patients medical history, echocardiogram, serum tryptase, C-reactive protein, hsTnT, plasma glucose, and lipid parameters. Serum tryptase levels were measured by the ImmunoCAP tryptase in vitro fluoroenzyme-immunoassay test (Phadia, now Thermo Fisher Scientific, Uppsala, Sweden), according to the manufacturer's instructions. Within 2 years after the aortic valve replacement, the patients were evaluated for the occurrence of MACE, including myocardial infarction, cardiac arrhythmias, stroke, systemic embolism, heart failure and sudden death. Table 1 shows patients' clinical characteristics. At 2-year follow-up, 3 patients experienced MACE: 1 died and 2 had acute coronary syndrome. In these patients tryptase levels were 10.9, 11.7 and 9.32 ng/ml, respectively, about twofold higher than in non-MACE ones: 5.69 and 5.58 ng/ml. Conclusions Our results could be in agreement with the literature of the last few decades, in which a relationship between high tryptase levels and the development of MACE in acute coronary syndrome patients was found, demonstrating the role of tryptase as a marker of the inflammatory and atherosclerotic process [13,14]. Indeed, in stenotic aortic valves mast cells secrete tryptase, chymase, cathepsin G and vascular endothelial growth factor, inducing extracellular matrix degradation and valvular neovascularization [15]. In conclusion, we hypothesized that high tryptase levels may be a risk factor for the development of MACE in severe aortic stenosis. Further studies on larger populations are required to confirm this hypothesis.
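The group comparison reported above can be reproduced directly from the five tryptase values given; the sketch below recomputes the group means and the roughly twofold difference.

```python
# Baseline serum tryptase (ng/mL) from the five patients reported above.
mace = [10.9, 11.7, 9.32]      # patients with MACE at 2-year follow-up
non_mace = [5.69, 5.58]        # patients without MACE

mean_mace = sum(mace) / len(mace)
mean_non = sum(non_mace) / len(non_mace)

print(f"MACE mean: {mean_mace:.2f} ng/mL")              # 10.64
print(f"non-MACE mean: {mean_non:.2f} ng/mL")           # 5.64
print(f"fold difference: {mean_mace / mean_non:.1f}x")  # ~1.9x
```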
Authors' contributions LML and EAP made substantial contributions to conception and design; MC, LF, and FL performed the acquisition, analysis and interpretation of data; CM reviewed the manuscript critically for important intellectual content. All authors read and approved the final manuscript.
2018-08-08T07:20:59.681Z
2018-08-07T00:00:00.000
{ "year": 2018, "sha1": "0340415e8afaeb1b4740874d9fe9b60f0dc80fdd", "oa_license": "CCBY", "oa_url": "https://clinicalmolecularallergy.biomedcentral.com/track/pdf/10.1186/s12948-018-0095-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0340415e8afaeb1b4740874d9fe9b60f0dc80fdd", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
159054802
pes2o/s2orc
v3-fos-license
MIGRATION POLICIES OF THE CZECH AND SLOVAK REPUBLICS SINCE 1989 – RESTRICTIVE, LIBERAL, INTEGRATIVE OR CIRCULAR?* Abstract The author compares the migration policies of the Czech Republic and Slovakia since 1993, including both immigration as well as integration. The text focuses mainly on the autochthonous policies of both countries in regard to labor migration as the main type of migration. Significant immigration is a recent phenomenon in both the Czech Republic as well as Slovakia, and neither immigration nor integration policy belongs among the priorities of either state. The Czech Republic seems to be more mature in adopting regulations for better access of foreigners to the labor market. However, when comparing the Czech Republic with the rest of Europe, it belongs to the most restrictive countries in terms of integration as well as in terms of immigration. Given the extremely low ratio of non-EU born adults becoming Czech citizens, the Czech Republic will remain an exclusionary democracy unless it either changes the voting rights or increases the naturalization rates by reducing the conditions imposed on foreigners. Quite interestingly, even though Slovakia lags behind the Czech Republic in terms of integration policies and naturalization rates, it is more inclusive in terms of political rights. Introduction Czech and Slovak migration policies have gone through almost thirty years of development. In 2017, both countries amended their Alien Acts, which significantly changed the practices applying to migrants up to that date. Even though both countries went through common historical and cultural trajectories, they have not followed the same pattern since 1993 regarding the migration policies of their states, and they have shaped their migration policies differently. The aim of the article is to compare the migration policies of the Czech and Slovak republics since 1993. The text will regard migration in its broader concept. Migration policy is therefore understood as a set of tools regulating entry to, exit from and residence in the country. It has two components: immigration policy and integration policy. Immigration policy is understood as the regulation of entry and exit, while integration policy is understood as a set of tools offering immigrants the opportunity to settle in the host country and incorporate into the majoritarian society's socio-economic and civic systems. In addition to that, the author is aware of the fact that the migration policies of both states have been shaped by international obligations and EU directives. Integration is understood as a process through which immigrants become full and equal participants in the various facets of the society. The article focuses mainly on the autochthonous policies targeting labor migration from third countries adopted freely by the Czech Republic and Slovakia, rather than those which were adopted because of EU accession or any other international regulations.¹
Migration flows in the Czech and Slovak republics The two states of Czechoslovakia were ones of emigration rather than of immigration until the 1989 Velvet Revolution. The regime change constituted a milestone, and both countries experienced a rise in immigration due to the liberal regime compared to both the previous regime as well as later ones. Since 1993, the countries followed different paths – the Czech Republic became a country of immigration while Slovakia remained a country of emigration (Szczepanikova, 2013; Bolečeková, 2014). The location of both countries in Central Europe, accompanied by not very favorable political opportunity structures, made them transit countries for migrants from the east and south. Figures from 2016 show that the percentage of foreign-born nationals is higher in the Czech Republic – 4.4%, while in Slovakia it is 1.7%; the main acceleration of immigration was seen in the context of the EU accession. Even though the figures from the Czech Republic are far behind those of older member states (average 8-10%), the country is one of the new member states with the highest number of foreign-born nationals, whereas Slovakia remains at the tail end of the EU statistics. In the Czech Republic, the main countries of origin are Ukraine, Slovakia, Vietnam and Russia, while Slovakia is one of the few EU countries where most of the foreign population consists of nationals of other EU countries – the Czech Republic, Hungary and Poland. The number of foreigners in both countries has been gradually rising with a few exceptions (related either to stricter Aliens Acts, as a result of the economic crisis, or as a reaction to the EU accession); hence, within ten years' time there were five times more foreigners in both countries than in 1993. The main purposes of immigration in both countries are employment, education and family reunification. The majority of immigrants in the Czech Republic are employed in the low-skilled sector, with the exception of the Slovaks. A large proportion of them are applicants for university studies, and their age and educational structure differs significantly from that of the immigrants from Ukraine or other countries. In the case of the Slovak university students, the situation can even be described as a brain drain from Slovakia's point of view and a brain gain from the perspective of the Czech Republic (Frank, Procházka and Stojar, 2018, p. 3). In Slovakia, the majority of immigrants are employed in the high-skilled sector, which is being interpreted as a lack of domestic highly skilled labor. Slovakia saw a sharp drop in the share of persons from third countries and a rise in the proportion of persons from the EU since 2000, while the latter has prevailed since 2005. This implies the country's preference for immigration from other EU states. Greater representation of adult men also suggests a preference for labor migration as opposed to other types of migration, concentrated in the capital, Bratislava, and other big cities.
[Figure: Foreign population (% of total population), 1993-2013, Czech Republic and Slovakia.]
Labor migration became the most rapidly developing of all migration flows to Slovakia in the 2000-2016 period (Divinský, 2017).
Migration policy of the Czech Republic and Slovakia since 1989 Migration policies of both countries were formed by international treaties (UN, Council of Europe, and ILO) and were shaped by the EU accession in 2004 and the entry into the Schengen area in 2007. Last but not least, the 'migration crisis' contributed to the latest developments in the migration policies of both countries. The conceptualization of the migration policies is more or less the same as stemming from the international obligations, and includes both migration as well as integration (Ministerstvo vnitra České republiky, 2015; The Government of the Slovak Republic, 2011). Czech scholars usually categorize the evolution of Czech migration policy into periods based on the changes in the migration legislation and the general stance towards migration in the context of the overall economic situation (Drbohlav et al., 2010, p. 71). The first phase (1990-1996) is characterized by the fact that migration policy was not among the priorities of the first post-communist governments. The first migration law valid for both parts of the Czech and Slovak federation was adopted in 1992 (Act no. 123/1992 Coll.). It basically enabled anyone to settle in the country without limiting any of his/her activities, and allowed an applicant to apply for both long-term and permanent visas directly in the country. Migrants could not apply for Czech citizenship. The permanent residence permit was granted for the purpose of uniting a family, if the family member resided permanently in the ČSFR (Czech and Slovak Federal Republic), or in other humanitarian cases, or if justified by the foreign policy interests of the ČSFR (Act no. 123/1992 Coll.). The second stage (1996-1999) was characterized by the institutionalization of the migration policy as a reaction to illegal migration and rising unemployment, and culminated in the adoption of the Migration Law. The most important change was the introduction of permanent residence after the fulfilment of ten years of consecutive residence, and it became possible to apply for the visa only from outside the territory of the Czech Republic. The period culminated in the adoption of the new Aliens Act (no. 326/1999 Coll.). The Czech Republic also revoked its non-visa agreements with many of the post-Soviet countries, which were the main migrants' countries of origin. The Asylum Law (no. 325/1999) replaced the Refugee Law (no. 498/1990 Coll.) and was meant to harmonize with the EU laws. For the asylum seekers, it meant access to the labor market, free movement in the territory of the Czech Republic, and provision of accommodation, food and pocket money. The third stage (2000-2004), called the consolidating era, meant the convergence of Czech and EU laws, the mobilization of the civil society and the institutionalization of the migration policy. The Czech state initiated regulated migration with a new project focusing on qualified workers, initially from three countries (Bulgaria, Kazakhstan, and Croatia). Nevertheless, not many applications turned up, so the project was extended and more countries were included (Drbohlav et al., 2010, p. 79).
The short fourth period (2005-2007) started with the EU accession and is sometimes labelled the neoliberal period. It was accompanied by low unemployment, economic growth and a steady rise in immigration. The minimum period of consecutive residence needed for permanent residence dropped to five years, and migrants with permanent visas gained a more secure position. The neo-restrictive period (since 2008) has been typified by restrictive state policies based on security arguments (Kušniráková and Čižinský, 2011, p. 71). The revision of the Asylum and Migration Law (no. 314/2015 Coll.) was planned in order to comply with the Common European Asylum System (CEAS), but the parliamentary debate and the outcome were also influenced by the 2015 migration wave. On the one hand, this meant that asylum seekers had more open access to the labor market (the time limit dropped from 12 to 6 months); on the other hand, the time period for the decision was extended from 90 days up to 6 months, while the Ministry of Interior (MoI) was able to interrupt the procedure when the situation in the country of origin was 'unstable'. Human rights activists criticized the 2015-2016 amendments for, among other things, extending the maximum time limit for the procedure, not fully transposing the EU directive on judicial review ex nunc, and restricting lawyers from participating in the asylum seekers' interviews with the MoI. However, the amendments offered more liberal access to the labor market than the EU laws require (under the EU regulations, an asylum seeker is granted a work permit only after 9 months of residence).

Czech migration law continued on its restrictive path. The new law was passed two years later (in 2017) and resulted in more restrictive conditions for residence. It was met with opposition from human rights activists, lawyers, and business unions, and it was returned by the Senate to the Parliament as not being in line with the Constitution. In the end, however, it was passed and signed by the Czech president. The new law meant a lack of judicial review over the asylum procedure and more restrictions both on family reunification and on requests for permanent residence as such (Consortium of Migrants Assisting Organizations in the Czech Republic, 2017).

Czech migration policy focuses on qualified workers with higher prospects of integration (cultural, religious, linguistic factors). The proclaimed aim of the state is to integrate legal migrants into all aspects of society (economic, legal, social, cultural, language knowledge, health care, participation in public life) (Vláda České republiky, 2017, p. 7; Macáková, 2013). Illegal migration and the poor integration of legal migrants, which can cause social tensions, were defined as security threats to the Czech Republic (Ministerstvo vnitra České republiky, 2016, p. 200).

The Slovak migration policy followed a somewhat similar pattern to its Czech counterpart, though with a bit of delay, and for a long time it lacked a comprehensive strategy on migration. The very first, federal, and very liberal Alien Act of 1992 (Act no. 123/1992 Coll.)
was in place for four years. The second stage (1995-2002) meant the institutionalization of the migration policy. The first Slovak Alien Act and Refugee Act were adopted in 1995 (no. 73/1995 and no. 283/1995), and introduced the term 'asylum' to replace the term 'refugee'. The Slovak authorities introduced a change in the application for permanent residence, which could from then on be applied for only from abroad. Unification of family or foreign interest of Slovakia were the only grounds for granting permanent residence, and naturalization was not mentioned. The first important agreement was the bilateral one with the Czech Republic - a special and non-standard one on the mutual employment of citizens, in force since 1994, mainly beneficial for Slovaks working in the Czech Republic, and valid until the accession of the two countries to the EU. The second most important bilateral agreement signed in this period was the one with Ukraine. It was the result of huge emigration flows from this neighboring country to Slovakia and was in force from 1998, though it imposed limits on the number of persons. Despite pressure from Slovak employers and the Ukrainian authorities, the Slovak state repeatedly refused to increase the quotas. New and more comprehensive legislation came only in 2002 (no. 48/2002 and no. 480/2002), under which permanent residence was granted on grounds of family unification or special Slovak interest. An alien with the status of foreign Slovak has a specific position within the Slovak legal norms on migrants. The rights of such persons are guaranteed by the National Council of the Slovak Republic Act no. 70/1997 on Expatriate Slovaks (as amended by Act no. 403/2000) (Divinský, 2004, p. 71). The basic principles of the migration policy were adopted at the governmental level in 1993 and shaped the migration policy until 2005. The first government document dealing with integration was the Comprehensive Solution of the Process of Integration of Aliens with Granted Refugee Status, adopted in 1996 (Government Resolution no. 105/1996). However, this dealt only with the integration of aliens with refugee status - a very small group - leaving aside the far bigger community of other migrants. This group was addressed only due to the opening of the accession process with the EU and the rise in migration flows into Slovakia (Galanská, 2014).

The third period of Slovak migration policy (from 2002 to 2004/2005) was characterized by the harmonization of national legislation with EU laws. Within the pre-accession process, Slovakia was obliged to pass a multitude of legal standards, including on migration. The first somewhat comprehensive Concept of the Migration of the Slovak Republic was approved in 2005. It dealt with all kinds of migration, and also partially addressed the integration of migrants. Nevertheless, it still failed to address many issues related to both migration and integration. The fourth phase (2004/2005-2011) meant adaptation to the rise in immigration - Slovakia opened its labor market to all workers from the EU/EEA/Switzerland without imposing any restrictions. This era was marked by the absence of a coherent migration strategy, idea or plan; no precedents of labor migration policy existed at that time in Slovakia, and the state was looking for a new conception of migration policy (Divinský, 2007, pp.
204-205). The comprehensive policy on migration came only in 2009 in the Concept of the Integration of Foreigners in the Slovak Republic (Uznesenie vlády Slovenskej republiky č. 338/2009). It dealt with all aspects of integration (education, social security, health care, naturalization, civic participation, local participation, etc.). This era culminated in 2011, when the Act on Residence of Aliens was passed (no. 404/2011). It covered various aspects of issues regarding foreigners. The most important and comprehensive tool of the Act was Government Resolution no. 574/2011, which touched upon every detail of migration and integration policy with perspectives up to the year 2020. Although the document is not detailed, it represents a large shift towards a comprehensive migration and integration policy (Galanská, 2014).

Much like the Czech Republic, Slovakia decided to follow the path of regulated migration. The country focused on highly skilled EU workers while neglecting workers from third countries, leaving them with restricted access to the labor market, public employment services, the social safety net and self-employment. The lack of access to unemployment, maternity or housing benefits also hinders long-term integration, and non-EU migrant workers must leave Slovakia if unemployed.

The fifth era of Slovak migration policy (since 2011) was marked by the 'migration crisis', which for the first time put migration policy on the Slovak political agenda. Until then, immigration was rarely the subject of political debates, similarly to the situation in the Czech Republic. Slovakia adopted an amendment to the Alien Act in 2017 (the eighth amendment in a row); the changes targeted the definition of some terms, as well as the rules for the verification of invitations. The proceedings for temporary residence for specific categories of migrants from third countries - those working in strategic services or planning to realize an innovative business plan in Slovakia - have been shortened. Slovakia also adopted the European regulations dealing with seasonal workers and with mobility within an international company into the territory of Slovakia, and thus opened the labor market to migrants from third countries within these categories. Much like in the neighboring Czech Republic, where the European regulation tackling seasonal workers and mobility within an international company had already been adopted, these workers nowadays only need a visa and a work permit. The amendment also paved the way for the integration of long-term migrants and workers from third countries who lose their jobs; they now have sixty days to find a new job. The validity of the blue card (a work permit for highly skilled workers) was prolonged from three to four years, whereas in the Czech Republic the permit is issued for two years. Foreigners with a long-term permit can now register at the state agency for employment.
The Czech Republic hosted 493,000 foreigners altogether in 2016; those with permanent residence numbered 221,000 and those on long-term residence 272,000 (Český statistický úřad, 2017). In Slovakia, there were 97,934 foreigners with a legal permit in 2017, while the number of permanent residents from third countries reached 14,942 (Odbor analýzy rizík a koordinácie, 2017; Ministerstvo vnútra SR, 2018). The labor market in the Czech Republic is less restrictive for third-country nationals than in Slovakia, and non-EU immigrants are more active in the Czech Republic than in any of the 24 other European countries (MIPEX, 2011). Slovakia slightly opened its labor market to third-country nationals when adopting the EU regulations on seasonal workers, inter-company mobility, innovative projects and migrants working in strategic services. Most temporary residents have the right to become permanent residents after five years in the Czech Republic if they meet the somewhat restrictive requirements: they must have a fairly high income, proof of accommodation and a clean record, and they must pay the fee for the A1-level language test. There are similar requirements in Slovakia (five years of legal stay, proof of income, accommodation and health insurance, but no language test). Czech permanent residence is granted for ten years, while in Slovakia it is granted only for five years. Foreign nationals also have to submit, within 30 days, a medical assessment proving that they do not have a disease which poses a threat to public health. An estimated 64% of non-EU citizens became permanent residents in Slovakia, which is similar to the result in the Czech Republic (MIPEX, 2013).

The citizenship of both countries is based on the ius sanguinis principle, so citizenship is related to the blood line. However, both countries allow naturalization if certain conditions are fulfilled: a minimum of five years of permanent stay in the Czech Republic (eight consecutive years of permanent stay for Slovakia, with some exceptions - e.g., recognized refugees, spouses and minors), a good command of the language and of historical/social/economic/geographical/cultural facts, a clean criminal record, proof of income, and proof of not misusing the social systems. Compared to other European countries, the conditions are strict: a foreigner can ask for naturalization after ten years of legal residence in the Czech Republic (five years of temporary plus five years of permanent residence), while in Slovakia it is after thirteen years of legal stay (five years of temporary and eight years of permanent residence). This is one of the strictest restrictions in Europe regarding naturalization.
While the Czech Republic renounced the principle of a single citizenship in 2014 and moved towards the principle of dual citizenship, Slovakia went in the other direction and banned dual citizenship in 2010. This change was a reaction to a Hungarian law which enabled dual citizenship and was aimed at Hungarians living outside the Hungarian borders - in Slovakia, Serbia and Romania. There have been several attempts to reverse the existing law and allow dual citizenship, and in 2015 a new regulation came up as a solution for those who had already lost their Slovak citizenship in favor of another citizenship. Dual citizenship could be applied for and granted 'when in the interest of Slovakia' or for other reasons, such as 'family unification, health reasons, humanitarian reasons or the fact that the applicant is a former citizen of Slovakia' (cf. Act no. 186/2013 and Act no. 40/1993 Coll. as amended in 2015; Ministerstvo vnútra SR, 2015). Both countries follow the European trends in requiring the fulfilment of several conditions, including a good command of the language and knowledge about the host state. The Czech Republic chose a liberal option regarding the length requirement and decided on a five-year minimum residence requirement (as did Poland), while Slovakia chose the middle ground of eight rather than ten years (as did Spain). However, a comparison of naturalization rates (acquisitions of citizenship per 100 resident foreigners) shows that both countries are at the tail of the EU-28, with the Czech Republic, Slovakia and Estonia having the lowest naturalization rates in Europe.

As regards political rights, in the Czech Republic foreigners from EU countries are the only ones entitled to vote and run in municipal council elections once they have met the age and residence requirements. A citizen of another EU member state also has passive as well as active voting rights in elections for the European Parliament. Other nationals are excluded from both passive and active voting rights, despite the fact that the country ratified the Council of Europe Convention on the participation of foreigners in public life at the local level, valid from November 2015. Very few Czech politicians support voting rights for foreigners; the only enduring voice is that of the ex-minister for human rights and equal opportunities, Jiří Dientsbier. All foreigners, including EU nationals, are also banned from joining Czech political parties or from forming their own. With such a large group of foreign nationals in the country, an estimated 225,000 non-EU adults (aged 15+) are disenfranchised in elections; this makes up 2.6% of the total adult population - the highest level of disenfranchisement in Central Europe. On the other hand, Slovakia established more inclusive and extensive voting rights for non-nationals. Non-EU nationals with permanent residence have the right to vote in local elections, stand in local elections and vote in regional elections. So, only non-EU adults with temporary permits are disenfranchised; they make up 36% of all non-EU citizens in Slovakia. As in the Czech Republic, non-nationals cannot form, join or donate to the political parties that they vote for or stand for as candidates (MIPEX, 2015).
Our findings show that Slovak integration policies are more restrictive for foreigners than the Czech policies, with the exception of suffrage. According to MIPEX, the integration policy of the Czech Republic is the second best (23rd place, after Estonia) among the post-communist countries, whereas Slovakia remains at the tail, ranking 34th out of 38 countries. Still, the Czech Republic is described as only half-way favorable to migrants as compared to other countries. Education and political participation of migrants were identified as the weakest links of the Czech Republic, whereas Slovakia has shortcomings in five out of eight policy areas: labor market mobility, education, health, political participation, and access to citizenship (Huddleston et al., 2015; Štefančík, 2010; Uherek and Černík, 2004).

Two-dimensional restrictive-liberal and integrative-circular framework of Czech and Slovak migration policies

As Kušniráková and Čižinský (2011) point out, the discussion regarding Czech migration policy has been channeled into two conceptual frameworks: one is measured in the liberal-restrictive framework, and the other in the transparent/non-transparent perceptions of the policy. The liberal-restrictive framework focuses primarily on discussions concerning the reasons for migration and the number of immigrants received, regardless of the level of rights acquired by immigrants during their stay. As the terms 'liberal' and 'restrictive' cannot adequately describe all aspects of the ongoing development, they suggest that the liberal-restrictive dichotomy should be extended with an integration-circular one so as to better describe the stability of migrants' stay. The scholars propose a four-pole model for graphical illustration, in which the integration/circular axis is completely independent of the restrictive/liberal axis. The restrictive/liberal axis relates to immigration policies (difficult entry/easy entry). The integration/circular axis represents integration policy, where the integration pole stands for the integration of foreigners into the majority society, while circular migration means temporary migration without the aim to settle down (easy to settle/difficult to settle). The position of the countries is displayed in a diachronic manner (Kušniráková and Čižinský, 2011).
The immigration policy of the Czech Republic was very liberal in the 1990s. It was easy to enter the country, and the non-visa regime with the post-Soviet countries was still in place. Nevertheless, the country was not prepared for the permanent residence of foreigners, and it preferred a circular policy. Foreigners had to ask every year for a new permission, without any prospect of staying permanently. Both the immigration and the integration policies were being built from scratch. The tightening of the immigration rules culminated in the Aliens Act of 1999, whose main change was the inability to apply for permanent residence within the territory of the Czech Republic; the liberal approach under which a foreigner was able to apply within the territory was thus revoked. This resulted in a decline in the number of foreigners. What is more, foreigners also had to apply for a change of the purpose of their stay from abroad. Foreigners applying for a visa had to demonstrate, when checked at the border, that they had health insurance and sufficient financial means for their stay. A criminal record statement was required for visas of over 90 days. On the other hand, the Aliens Act meant a sharp move towards the integration pole, as it introduced the institution of permanent residency for foreigners who had lived in the country for ten years. The new Asylum Act also inclined more towards integration: asylum seekers were allowed to work immediately and to live outside the refugee camps, and the integration program for recognized refugees became part of the law.

The 2001 amendments to the Aliens Act introduced some minor liberalizations (the length of temporary residence was to be counted towards the period of time needed for permanent residence) and also a move towards more integration. The accession to the EU meant a slight liberalization and greater legal certainty for foreigners due to the adoption of the EU regulations. We also observe a slight move towards the integration pole: foreigners could apply for permanent residence after only five years of temporary residence. The country also launched a new pro-active policy targeting qualified workers from selected countries (though without prospects for their further settlement and also without much success). The Ministry of Labor and Social Affairs became the main body responsible for the integration of foreigners, which also meant a slight retreat from presenting migrants as a security threat. Nevertheless, in 2008 the competences regarding the integration of foreigners were transferred back to the MoI, and a sharp turn towards the restrictive pole was observed, via a reduction in work permits and a decrease in the number of foreigners with permanent residence (Kušniráková and Čižinský, 2011; Drbohlav et al., 2010; Vašečka and Košťál, 2009).
The 2015 Asylum Law meant, on the one hand, liberalization in terms of asylum seekers' access to the labor market and, on the other hand, tighter control by the MoI, given its position as an absolute arbiter. The 2017 Aliens Act meant another move towards the restrictive pole, and at the same time a move towards circular migration and a repressive notion of legal migration. Besides the restrictions already mentioned, the Act introduced the category of 'unreliable employer'. Unreliable employers are those who are in debt, who do not pay social insurance for their employees or who hire illegal employees; as such, they are prohibited from employing foreigners. The Czech Republic is thus in line with the countries that employ rigid employment protection legislation. A very significant move towards easier access to the Czech labor market was the introduction of the non-visa regime for Ukrainians as of June 2017. Although this is an EU regulation, it has a large impact on the Czech Republic, as most of its migrant workers come from Ukraine.

Figure 4 shows a graphical illustration of both the Czech Republic and Slovakia in terms of migration and integration policies. The author is aware of the subjectivity of the matter and that the portrayal of the moves is very approximate, as all phases were marked by the adoption of both restrictive and liberal regulations, and the same is valid for the integration/circular axis. In Slovakia, the Alien Act of 1995 saw the first restrictions imposed on the previously liberal 1992 law, which had placed no restraints on foreigners. This move towards restrictiveness meant that foreigners could now apply for permanent residence only from abroad. We observed that some regulations focused on integration and dealt with permanent stay (unification of family, special foreign interest of Slovakia), though the concept of naturalization was still not introduced in the Act itself, nor was there a coherent migration policy in terms of immigration or integration until 2011. The only milestone was the accession of the country to the EU, which meant the transposition of the EU regulations and the opening of the labor market for EU nationals. The country started to focus on EU workers while neglecting the long-term integration of non-EU nationals. The new comprehensive Alien Law (no. 404/2011) introduced temporary residence for many purposes (business, employment, study, unification of family, etc.), though tied to one of those purposes without the possibility to change the reason. Migrants could apply from within Slovakia when staying there legally. The new law also introduced the Blue Card for highly qualified workers and expanded the labor market for foreigners. The law newly introduced permanent residence after five years of legal residence as proof of a more pro-integration policy, though the 2010 Citizenship Act had banned dual citizenship in a move which restricted integration. The 2017 amendment to the Aliens Act meant a move towards both the liberal and the integration poles, improving both access to the labor market and integration policies for migrants from third countries. Nevertheless, one has to keep in mind that these were only EU regulations, which the Czech Republic had adopted long before.
Conclusions

Significant immigration is a recent phenomenon in both the Czech Republic and Slovakia, and neither immigration nor integration policy is among the priorities of the two states. For a long time, both countries lacked coherent immigration and integration policies; these policies developed slowly (from 2009 in the Czech Republic and 2011 in Slovakia) with the rise of immigration and were driven by the need to adapt to EU regulations. Legal practices towards EU and non-EU citizens are quite different in both countries. Third-country nationals face the greatest restrictions on their employment, and they are not entitled to access the social support system during temporary residence. Unlike elsewhere in Europe, both countries lack a coherent integration program for newly arrived immigrants, and there are no systematic language courses or trainings.

The Czech and Slovak Republics have gone through almost thirty years of migration policies set up in the new democratic era. We observe that the development of the migration policies of both states depended upon EU accession as well as on the rise in immigration. The key dates regarding migration in both countries were the EU accession in 2004 and joining the Schengen zone in 2007; both countries therefore comply with the basic minimum legislative migration framework. The Czech Republic is among the most important immigration countries in Central and Eastern Europe (CEE), though still with a very small immigrant community when compared to Western European countries. Slovakia, on the other hand, has a very low immigration rate compared to the Czech Republic and a high rate of emigration of highly skilled Slovaks. This contributes to the fact that emigration (threat of brain drain vs.
economic gain) is included in every migration policy of the Slovak Republic. The Czech Republic has low emigration of natives and therefore does not face the problem of brain drain. This paper presented an overview of the most important developments in the field of migration policies, in a broader sense, in the Czech Republic and Slovakia. Both countries built their migration policies from a blank slate, and the migration policies had to be invented from scratch. The Czech Republic seems to be more mature when compared to its neighbor. Both countries initially employed very liberal policies in terms of migration but became more restrictive over time. Both countries initially had no integration policies and tended towards circularity, adopting integration policies only on the path to the EU. It seems that the rise of immigration also meant a U-turn in Czech migration policy, returning to the restrictive pole in terms of immigration and to the circular pole in terms of permanent settlement, though Slovakia lags well behind the Czech development. The Czech Republic is more systematic in the field of integration and admits foreigners to the labor market, though it is still criticized by human rights activists. The Czech labor market is more open to foreigners, unlike Slovakia, where the EU regulations dealing with seasonal workers were adopted only recently. As for the integration of foreigners, Slovakia inclines towards circular migration, enacting stricter conditions for temporary residence and naturalization. On the other hand, Slovakia is more inclusive in giving foreigners more extensive political rights. Unlike Slovakia, the Czech Republic is up to date in adopting the possibility of dual citizenship, though upon the condition that the applicant has not so far been a burden on the state social support system.

Both countries employ a highly centralized approach, with the Ministry of Interior playing a key role. The integration of foreigners in Slovakia is managed by the Ministry of Labor, Social Affairs and Family; in the Czech Republic, this was only temporarily the case (Ministry of Labor and Social Affairs, 2014). This temporary arrangement was seen as a symbolic change of public discourse, with migration understood from the perspective of demographic change rather than security threats. Nevertheless, in 2008, the coordinating role in the implementation of the integration of foreigners was transferred back to the Ministry of Interior, and the security discourse became prevalent while a human rights discourse is lacking.

The Czech Republic is more mature in the sense that civil society is highly engaged in lobbying for, drafting and commenting on new legislation: the Consortium of Migrants Assisting Organizations was set up in 2003 and currently works as an umbrella for 18 NGOs working with and assisting migrants. Slovakia offers a very different picture: migration policy was more or less a marginal topic until the EU accession, it has gained attention only slowly, and the few civil organizations do not have much say in the legislative process.
What kind of migration policies do the Czech Republic and Slovakia have, then? The policies of both countries have been shaped by EU regulations and by increasing immigration. The starting point was the same for both, and both states had to create their migration policies from a blank slate. When comparing the Czech Republic with Slovakia, the Czech Republic seems to be more mature, adopting regulations for better access of foreigners to the labor market. However, when we compare the Czech Republic with the rest of Europe, it belongs among the most restrictive countries in terms of integration as well as immigration. Given the extremely low ratio of non-EU-born adults who become Czech citizens, the Czech Republic will remain an exclusionary democracy unless it either extends its voting rights or increases the naturalization rate by easing the conditions for naturalization. Interestingly enough, even though Slovakia lags behind the Czech Republic in terms of both integration policies and naturalization rate, it is more inclusive in terms of political rights. It would therefore be worth comparing the migration policies of all of the European states, which is unfortunately beyond the scope of this article.

Figure 2: Total first residence permits issued to non-EU citizens per 1,000 inhabitants. Source: Eurostat, 2015.

Figure 4: The development of migration policies in the Czech Republic and Slovakia in 1989-2017. Source: Author's work.
2019-05-21T13:04:52.483Z
2019-02-28T00:00:00.000
{ "year": 2019, "sha1": "d0a6acdeec7f5da37610912bf294ff921ad11b14", "oa_license": "CCBY", "oa_url": "https://rtsa.ro/tras/index.php/tras/article/download/590/579", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d0a6acdeec7f5da37610912bf294ff921ad11b14", "s2fieldsofstudy": [ "Economics", "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
250317172
pes2o/s2orc
v3-fos-license
R. Stephen Berry and the Berry pseudorotation

R. Stephen Berry (1931-2020) was a Harvard-educated American pioneer of molecular structure studies. He is most famous for the phenomenon of Berry pseudorotation and for his studies of intramolecular motion and molecular fluxionality. This remembrance focuses on this discovery. He had broad interests in many other aspects of structural chemistry and physical chemistry, and also in the economics of energy.

R. Stephen Berry (1931-2020; Fig. 1) was James Franck Distinguished Service Professor, Emeritus, at the University of Chicago at the time of his death. He was a most original and influential physical chemist with major contributions to the science of structures. Obituaries reviewed his accomplishments, which concerned a broad spectrum of the physical sciences (see, e.g., [1]). In 1995, one of us recorded a long conversation with him about his career, with an emphasis on his discoveries related to intramolecular motion and, in particular, what has become known as Berry pseudorotation [2]. In this remembrance, in addition to a brief general review of his career, we focus on Berry pseudorotation and its implications. We stress that this was only a small fraction of his contribution to the science of structures.

Origin, education, career

Berry was born in Denver, Colorado. His father was in the real estate business and his mother was a teacher. He was 6 years old when he received a chemistry set from his parents as a Christmas present. Theirs was a Jewish family, but it celebrated both Christmas and Hanukkah. By the time he was at junior high school, he had built up a chemistry lab in the basement of their house. In ninth grade, he wrote a report about his career choice, and it was about chemistry and physics. It took some courage on his part to go in that direction, because at the time anti-Semitic discrimination hindered Jews from getting positions as scientists. The influx of European refugee scientists and their contribution to American defense during World War II helped change the situation. By the time Berry started his career, he did not experience discrimination to any significant degree. His career choice was strengthened by Paul de Kruif's Microbe Hunters and also by Milton Silverman's Magic in a Bottle and Bernard Jaffe's Crucibles: The Story of Chemistry from Ancient Alchemy to Nuclear Fission.

Contrary to general experience, he found chemistry more exciting than physics in high school. This is a good example of how strong an influence textbooks, and especially teachers, may have. According to Berry, as he reminisced decades later, physics as it was being presented in his school was about "ladders leaning against walls," whereas chemistry was about "the structure of the atom and all other interesting stuff, quantum theory, for example" [3]. In his last year of high school, Berry participated in a Westinghouse science competition, and a Westinghouse Talent Fellowship made it possible for him to choose among the best schools for continuing his education. He considered the Massachusetts Institute of Technology, the California Institute of Technology, and Harvard University. His interest in literature and philosophy, beside science, made him opt for Harvard in 1948. He got his bachelor's degree in 1952, his Master's in 1954, and his PhD in 1956, all from Harvard.

("R. Stephen Berry and the Berry pseudorotation" is a contribution to the column "Foundation of structural science.")
He had a remarkable scientist to mentor his doctoral work, the British (Scottish) William (Bill) Moffitt. Moffitt was educated at Oxford University and did his doctorate under the famous theoretical chemist and applied mathematician Charles A. Coulson, who was a pioneer in applying quantum mechanics to valency and other aspects of molecular structure. Moffitt solved a series of problems in the electronic structure of molecules and developed a new concept known as "atoms-in-molecule." He joined Harvard University in 1953 as an assistant professor of chemistry, soon to be promoted to associate professor. Several of the rising stars of the field, such as Roald Hoffmann, wanted to have him as their mentor. Alas, Moffitt's untimely death denied the world of chemistry a most talented and charismatic leader. He lived his short life to the fullest, and he died on the squash court, stressing himself to the limit in spite of previously diagnosed heart problems.

After his PhD, Berry stayed at Harvard as a temporary instructor for a year and a half. Then he worked as an instructor at the University of Michigan in Ann Arbor, 1957-1960. This was followed by a tenure-track assistant professorship at Yale University, 1960-1964. There was not yet an offer for a tenured position at Yale when the University of Chicago offered him one, so he moved there in 1964 and stayed in Chicago for the rest of his life.

By the time Berry started his Chicago career, the great science of the post-World War II period at its James Franck Institute had become memory, yet its aura stayed on. Berry had a considerable share in maintaining the high level of science at the University of Chicago. He attacked fundamental questions, such as how a system decides to become a glass or a crystal, or how a protein folds to the right structure. One of the major areas of his research at the University of Chicago was the formation and behavior of atomic and molecular clusters, and especially their dynamics. Our recording took place during a NATO workshop on clusters. Berry was interested in a variety of issues concerning science and public policy and, beside Chemistry, held an appointment in the School of Public Policy Studies at the University of Chicago. He was a Fellow of the American Academy of Arts and Sciences (1978).

Many of the scientific problems that occupied him during the later period of his career could be traced back to his early discoveries in the 1960s regarding intramolecular motion, especially those involving large-amplitude deformation motion, and fluxionality - essentially to what has become known as Berry pseudorotation.

Berry pseudorotation

Pseudorotation (Fig. 2) appears when identical atoms, with no distinguishing labels on the atoms, permute among nonequivalent sites, and the process looks like a rotation of the molecule. If the atoms are labeled, it is seen that there is permutation as well as rotation. What Berry discovered was the first real example of a large-amplitude pseudorotation that scrambles bonds. The specific discovery concerned the motions of the fluorine ligands in phosphorus pentafluoride. It was in the early days of nuclear magnetic resonance when it was observed that identical atoms in chemically inequivalent sites had different magnetic resonance frequencies. These differences were called chemical shifts. It was shown first by Herbert S. Gutowsky, David W. McCall, and Charles P.
Slichter at the University of Illinois [5] that if there is rapid exchange of inequivalent protons, one sees an averaged signal and the chemically non-equivalent sites cannot be distinguished from one another. Further, Gutowsky and Andy Liehr (then an undergraduate) [6] found a single fluorine frequency for PF5. In contrast, PF5 had been determined by electron diffraction to have a trigonal bipyramidal geometry with axial and equatorial P-F bonds whose lengths differed significantly [7]. Berry proposed a mechanism [8] in which the longer axial pair of fluorines bends away from the linear F-P-F line and moves over to form a triangle with one of the equatorial fluorines. Simultaneously, two of the three fluorines in the equatorial plane move out to become the new axial atoms. The net result is as if the PF5 molecule were rotated by 90°, with the polar axis moving from vertical to horizontal. Berry also explained why the NMR spectra showed equivalence of the fluorines: he proposed that the process is fast compared with the observation time of the nuclear magnetic resonance experiment.

This was a seminal discovery, but it is possible that Berry himself did not recognize its significance at the time. He described it in a brief section of a longer paper, titled "Correlation of rates of intramolecular tunneling processes, with application to some group V compounds." The thrust of the paper was a discussion of systematic relations for large-amplitude motions, especially tunneling motions [8].

Dynamic processes like the Berry pseudorotation had been proposed for other phenomena before, and Berry himself stressed this [9]. Thus, what may have been the first such proposal was made by John Wheeler and Edward Teller in the late 1930s, when they attempted to interpret the behavior of the neon-20 nucleus. They described it by an alpha-particle model of five alphas. In this model, the five alphas take a trigonal bipyramidal configuration, which demonstrates pseudorotation. This explanation eventually proved wrong, but the idea was original and valuable. Berry did not refer to Wheeler and Teller because at the time he was not aware of their work. He made reference to another, unpublished, proposal concerning the CH5+ molecular ion. The pseudorotation model did not prove correct for CH5+ either, because its geometry is not trigonal bipyramidal in the first place; rather, CH3+ and H2 are held together in it by a weak interaction. This was reinforced by recent computational work: the carbocation CH5+ has a structure of Cs symmetry with three 2-center-2-electron bonds and one 3-center-2-electron bond [10].

Another forerunner of the discovery of Berry pseudorotation was Kenneth S. Pitzer's observation of the pseudorotation of cyclopentane [11]. There, small-amplitude motions lead to pseudorotation. There is a near-symmetry-axis for the non-planar pentagonal molecule, and rotation appears conspicuously. Pitzer called it cyclopentane pseudorotation. In a simplified way, the pseudorotation of cyclopentane may be described as follows [12]: imagine one of the five carbons out of the plane of the other four carbons. Then the out-of-plane carbon exchanges roles with one of its two neighbors (the hydrogen ligands always move along with their carbon). This exchange is equivalent to a rotation of this motion by 2π/5 about the near-symmetry-axis perpendicular to the ring (see, e.g., Ref. [13]).
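To make the site scrambling of the Berry mechanism concrete, the following minimal sketch (our own illustration in Python, not anything from Berry's paper) treats one Berry step in PF5 purely as a permutation of labeled ligands between axial and equatorial sites. Repeated steps with different equatorial pivots let every fluorine visit an axial site, which is why a sufficiently slow probe such as NMR sees a single averaged fluorine environment.

```python
import random

def berry_step(axial, equatorial, pivot):
    """One Berry pseudorotation step on a trigonal bipyramid:
    the axial pair swings into the equatorial plane, while the two
    equatorial ligands other than the pivot become the new axial pair."""
    assert pivot in equatorial
    new_axial = equatorial - {pivot}   # two equatorial ligands move to the poles
    new_equatorial = axial | {pivot}   # the old axial pair joins the pivot
    return new_axial, new_equatorial

# Label the five fluorines of PF5; start with ligands 1 and 2 axial.
axial, equatorial = {1, 2}, {3, 4, 5}
visited_axial = set(axial)
random.seed(0)
steps = 0
while visited_axial != {1, 2, 3, 4, 5}:
    pivot = random.choice(sorted(equatorial))  # any equatorial ligand can pivot
    axial, equatorial = berry_step(axial, equatorial, pivot)
    visited_axial |= axial
    steps += 1

# All five ligands have occupied an axial site, so on a slow time scale
# the five fluorine environments average out.
print(f"all five ligands reached an axial site after {steps} steps")
```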
This description brings up a favorite topic of Matisse's art: five dancers forming a ring. Now, imagine that one of the dancers jumps while the other four stay on the ground. Then the next dancer jumps, and so on. The motion rotates, not the dancers. A quick snapshot catches one of the dancers in the air, and the ensemble has a symmetry plane. If, however, the exposure time is sufficiently long, there will be a blurred image with all the dancers slightly above the ground, and there will be fivefold symmetry.

Berry pseudorotation has become well known, but it has remained unclear when and by whom Berry's name was attached to it - no doubt, deservedly. Berry himself thought of several possibilities [9]. He corresponded with F. Albert Cotton about the possibility of pseudorotation for Cotton's transition metal carbonyls, so Cotton might have coined this name. Berry also corresponded with Earl Muetterties about it, and Muetterties was especially interested in the relationship between the lifetime of structures and the reaction times of physical measurements. There was also the possibility that Frank Westheimer had added Berry's name to pseudorotation. Westheimer found the most far-reaching implications of the process in studying RNA chemistry. He and his students showed that pseudorotation accompanies the hydrolysis of some cyclic phosphate esters [14]. Four-coordinate phosphorus goes into a transition state, then goes through the pseudorotation as five-coordinate, and finally goes back to four-coordinate. There is an analogy here with the PF5 structure in that the axial positions have weaker bonds than the equatorial positions. Something comes in to form an axial bond and rearranges to form a stronger equatorial bond, and something else then becomes axial and breaks off.

The time-scale relationships are important in interpreting the data that different experimental measurements may yield on various structures. There is an apparent paradox: nonidentical nuclei can occupy observably equivalent sites and, vice versa, identical nuclei may occupy non-equivalent sites. According to quantum mechanics, identical electrons or other identical particles have to be indistinguishable, and the wave function for identical particles has to reflect this indistinguishability. In contrast, chemistry is based on the distinguishability of different sites in molecules. The answer to this apparent paradox lies in the variability of the relationship between the lifetime of a structure and the observation time of the physical phenomenon being used for its determination. The cyclopentane molecule will appear definitely non-symmetrical if observed with a very fast physical phenomenon, and will possess fivefold symmetry if the measurement is sufficiently slow.

Decades after his original discovery, Berry returned to further studies of pseudorotation and its implications. He was increasingly interested also in clusters. Eventually, however, his thinking turned toward the more general question of the intricacies of complex potential surfaces. He was trying to resolve the problem of getting more information out of computations on such systems than could be reasonably digested. He was formulating puzzles that to us resembled those faced by developmental biologists: why do some things form well-defined structures, while others form glasses?

Funding: Open access funding provided by Budapest University of Technology and Economics.
2022-07-07T13:33:37.755Z
2022-07-07T00:00:00.000
{ "year": 2022, "sha1": "6955e6a853e62c211d094a9cade201623b1fb566", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11224-022-02009-8.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "6cc523c18fa0d6dd7586772713ee099ff0398657", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
251952006
pes2o/s2orc
v3-fos-license
Rational polypharmacological targeting of FLT3, JAK2, ABL, and ERK1 suppresses the adaptive resistance to FLT3 inhibitors in AML

Key Points
• JAK2, ABL, and MAPK signaling drive adaptive resistance to FLT3 inhibitors.
• Polypharmacological targeting of FLT3, JAK2, ABL, and MAPK signaling provides a durable response in AML.

Introduction

Kinase-activating mutations in the FMS-like tyrosine kinase 3 (FLT3) gene represent the most frequent molecular lesion in acute myeloid leukemia (AML) [1-3]. Approximately one-third of patients with AML harbor an internal tandem duplication (ITD), which is associated with poor treatment outcomes and overall survival even after stem cell transplantation [4,5]. In addition, a significant number of patients harbor kinase-activating mutations in the activation loop (D835) with an unknown prognosis [6,7]. Several small-molecule FLT3 tyrosine kinase inhibitors (TKIs) have been evaluated in the last 2 decades, resulting in the approval of midostaurin and gilteritinib for the treatment of AML. Despite significant advancements in developing FLT3 inhibitors, treatment failure is common owing to the emergence of resistance. Most patients develop resistance within a short duration, even on continued TKI therapy. Both type I (gilteritinib and crenolanib) and type II (quizartinib and sorafenib) FLT3 inhibitors have been evaluated in patients; regardless of their selectivity and mode of inhibition, resistance emerged. Nonetheless, the mechanisms driving resistance partly differ between type I and type II inhibitors. For instance, resistance to type II inhibitors is predominately mediated by on-target selection of resistant mutations in the FLT3 kinase domain, whereas resistance to type I inhibitors is principally driven by off-target activation of RAS-MAPK, BCR-ABL, and JAK2/JAK1 signaling [6,8-10]. A minority of patients with treatment resistance showed the emergence of mutations in genes regulating metabolism and transcription; however, the relevance of these mutations in conferring resistance has not been evaluated. Besides, as in other TKI-treated malignancies, the gatekeeper mutation (F691L) confers resistance to both type I and type II FLT3 inhibitors and poses a significant clinical challenge. It is not clear why on-target acquired resistance is more frequent with type II inhibitors than with type I inhibitors, for which adaptive resistance through compensatory signaling is more common. Possibly, the active kinase conformation stabilized by a type I inhibitor, although enzymatically inactive, retains a nonenzymatic function (as a signaling scaffold) that drives adaptation to alternative survival pathways. Nevertheless, these studies showed that FLT3-resistant variants can be suppressed by switching to next-generation FLT3 inhibitors or by dose escalation, as demonstrated in managing TKI resistance in chronic myeloid leukemia (CML). However, unlike in CML, resistance conferred by off-target activation of MAPKs, BCR-ABL, and JAKs in AML remains a serious challenge. Besides, FLT3-ITD-expressing leukemic stem cells (LSCs) residing in bone marrow (BM), like LSCs in CML, are refractory to TKIs and serve as a reservoir for the development of resistance [3,11]. Hematopoietic cytokine-mediated activation of JAK2 and MAPK signaling by FLT3 ligand (FL) [12], CXCR4, FGF2 [13], interleukin 3 (IL-3), and granulocyte-macrophage colony-stimulating factor (GM-CSF) [14] has been described as conferring TKI resistance [14,15].
Patients with AML who achieved complete remission lacking FLT3-ITD clones showed better overall survival than patients with minimal residual disease (MRD), suggesting that eradicating the FLT3-ITD clones will yield a durable response with better overall survival. The persistence of residual disease serves as a reservoir for the development of resistance that eventually leads to disease relapse. Strategies aimed at greater front-line disease eradication and suppression of resistance are needed, most of which depend on further research into combination chemotherapy or the development of polypharmacological agents targeting FLT3, its resistant variants, BCR-ABL, JAK2, and MAPKs. To address this, we performed a cell-based screen to identify small-molecule inhibitors active against FLT3, BCR-ABL, JAK2, and MAPKs. Here we show that pluripotin (SC-1) [16], an inhibitor of RasGAP and MAPK3 [16], potently inhibits the kinase activity of FLT3, BCR-ABL, and JAK2. Structural modeling studies revealed that it binds the inactive conformations of FLT3, JAK2, and ABL. It is an equipotent inhibitor of both FLT3-ITD and its most vexing resistant variant, the gatekeeper mutant F691L. Expectedly, treatment with pluripotin efficiently suppressed the adaptive resistance conferred by MAPK, BCR-ABL, and JAK2 signaling. As a proof of concept, we provide evidence that the unique polypharmacology of pluripotin, targeting FLT3, BCR-ABL, JAK2, and MAPK, efficiently suppressed leukemic progression in multiple preclinical mouse models and in mice engrafted with primary AML cells. Our preclinical data suggest that upfront targeting of the key signaling nodes driving adaptive resistance with polypharmacological agents provides a durable response, which is not achieved by currently used FLT3 inhibitors. Future drug design should focus on developing polypharmacological agents targeting these signaling nodes to achieve a durable response in AML.

Apoptosis assay

Briefly, 1 × 10⁶ cells were seeded into 6-well plates with varying pluripotin concentrations (0-100 nM). Cells were collected 24 hours after drug incubation and stained with annexin V and propidium iodide (PI) for 15 minutes according to the manufacturer's protocol (BD Biosciences, Franklin Lakes, NJ). Flow cytometry analysis was done on a FACSAria (BD Biosciences). Results were analyzed using FlowJo software.

Isolation of Lin− and CD34+ cells, ex vivo culture, and apoptosis assay

Mouse BM from WT and Rosa21-CreERT2;Flt3-ITD/Tet2-fl/fl mice was collected from the femurs and tibias after euthanasia. Lineage-negative (Lin−) cells were purified using magnetic beads (Miltenyi Biotec, North Rhine-Westphalia, Germany) according to the manufacturer's instructions. The isolated cells were plated in triplicate in 24-well plates in serum-free expansion medium (SFEM) containing 50 ng/mL of TPO, SCF, and FLT3LG and 10 ng/mL of IL-3, with 0.1 μM pluripotin or 1 μM gilteritinib, and incubated at 37°C for 6 days. On day 6, cells were stained for annexin V, PI, and the cell-surface markers Sca-1 and c-Kit as described earlier. Labeled cells were analyzed by flow cytometry (FACSAria, BD Biosciences). Likewise, human CD34+ cells from normal and AML donors were isolated using magnetic beads as described earlier. A hundred thousand CD34+ cells from both normal and leukemic samples were seeded, with and without the FLT3 inhibitors described above, in SFEM media reconstituted with 100 ng/mL of SCF, FLT3LG, TPO, and GM-CSF and 10 ng/mL of IL-3 and IL-6.
On day 6, cells were harvested and stained for annexin V and PI along with the CD34 and CD38 cell-surface markers as described earlier. Labeled cells were analyzed by fluorescence-activated cell sorting (FACS), and results were analyzed using FlowJo. Five thousand CD34+ cells were seeded, with and without kinase inhibitors, in MethoCult (STEMCELL Technologies) in triplicate. Colonies were enumerated at day 14 and are presented as percentages relative to vehicle treatment (considered as 100%).

In vivo efficacy of pluripotin

Human AML cell lines with FLT3-ITD mutations (Molm13 and MV4-11) were transduced to express firefly luciferase and cherry fluorescent protein for in vivo imaging and tracking of disease progression, respectively [15,18]. Virus production and transduction were performed as described above. One million cherry+ cells were injected into the tail veins of immunocompromised NSG-SGM3 (NSGS) mice (Jackson Laboratories, Bar Harbor, ME). We did not use NSG mice for the in vivo validation studies, as done in previous studies [24], because they lack key human hematopoietic cytokines implicated in conferring resistance to TKI treatment. For instance, the hematopoietic cytokines IL-3, GM-CSF, and FLT3LG were shown to abrogate the TKI response in AML by activating JAK2 and RAS-MAPK signaling [14,15]. Triple-transgenic NSGS mice expressing human IL-3, GM-CSF, and SCF, although not perfect, more closely represent the clinical condition. Besides, NSGS mice allow superior engraftment of diverse hematopoietic lineages and primary AML samples compared with NSG mice. Two days after transplantation, mice were intraperitoneally administered gilteritinib (30 mg/kg daily) or pluripotin (15 mg/kg daily). The leukemic burden was determined weekly using peripheral blood (PB). All experiments involving mice were performed according to the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Cincinnati Children's Hospital Institutional Animal Care and Use Committee.

In vivo efficacy of pluripotin in a murine AML model

One million BM cells from Rosa21-CreERT2/Flt3-ITD/ITD/Tet2-fl/fl mice, mixed with 1 million BM cells from BoyJ (CD45.1) mice, were transplanted into lethally irradiated recipient BoyJ mice. Two weeks after transplantation, Tet2 was deleted by tamoxifen injection; drug treatments were as described in the preceding section. Leukemic burdens were monitored weekly in PB by FACS and hemacytometer.

In vivo xenograft of AML and treatment

Three million primary human AML cells were transplanted into sublethally irradiated NSGS mice. After 2 to 4 weeks, leukemic engraftment was analyzed by determining the levels of human CD45 by FACS. Drug treatments, as described above, were started 3 weeks after transplantation and continued for up to 16 weeks. Leukemic burdens were analyzed weekly by determining the levels of human CD45.

Statistical analysis

Statistical analyses were conducted using Prism software version 9.0 (GraphPad). Median survival was compared by a log-rank test. For in vitro studies, statistical significance was determined by the 2-tailed unpaired Student t test. A P value of <.05 was considered statistically significant. In all figures: ns, not significant; *P ≤ .05; **P ≤ .01; ***P ≤ .001; ****P ≤ .0001. Unless otherwise indicated, all data represent the mean ± standard deviation (SD) from 3 technical replicates.
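For readers who want to reproduce these tests outside Prism, a minimal Python sketch of the two analyses named above is given below. The viability values, survival times, and event flags are invented for illustration, and scipy/lifelines are our choice of open-source equivalents, not tools used in the study.

```python
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

# Two-tailed unpaired Student t test on an in vitro readout
# (three technical replicates per arm, as in the text).
vehicle = np.array([100.0, 96.5, 103.2])   # hypothetical % viability
treated = np.array([42.1, 38.7, 45.0])
t, p = stats.ttest_ind(vehicle, treated)    # two-sided by default
print(f"t = {t:.2f}, p = {p:.4g}")          # p < .05 -> statistically significant

# Log-rank test comparing survival between two treatment arms
# (event flag 1 = death observed, 0 = censored at end of study).
days_ctrl = [28, 31, 35, 40, 42]            # hypothetical survival (days)
days_drug = [60, 75, 90, 112, 112]
res = logrank_test(days_ctrl, days_drug,
                   event_observed_A=[1, 1, 1, 1, 1],
                   event_observed_B=[1, 1, 1, 0, 0])
print(f"log-rank p = {res.p_value:.4g}")
```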
Pluripotin inhibits FLT3, BCR-ABL, and JAK2

The molecular heterogeneity of AML and the identification of multiple mechanisms of resistance to targeted TKI therapies strongly support the rationale for combinatorial approaches. Although combinatorial treatment strategies have shown effective responses in preclinical cancer models, their clinical realization is hampered by drug-drug interactions and differing pharmacokinetics. We reasoned that exploiting the polypharmacology of a single drug targeting compensatory survival pathways would be more effective in FLT3-targeted AML. To address this, we performed a cell-based screen of MAPK pathway inhibitors for selectivity against the FLT3, BCR-ABL, and JAK2 kinases (supplemental Figure 1A-B). We transduced Ba/F3 cells, a murine IL-3-dependent hematopoietic cell line, with retroviruses expressing MSCV-FLT3-ITD-Ires-cherry, MSCV-BCR-ABL-Ires-venus, and MSCV-Jak2-V617F-Ires-GFP. Ba/F3 cells expressing cherry, GFP, and venus were sorted by FACS and selected for IL-3 independence; expression of a constitutively active kinase renders Ba/F3 cells capable of IL-3-independent growth. A small-molecule library of MAPK pathway inhibitors was screened at drug concentrations of 0 μM to 10 μM for selective inhibition of FLT3, ABL, JAK2, and MAPK (supplemental Figure 1A-B). This screen identified pluripotin as a potent inhibitor of FLT3, BCR-ABL, and JAK2 (Figure 1A). It effectively suppressed the proliferation of BaF3-FLT3-WT (IC50, 7 nM), BaF3-FLT3-ITD (IC50, 8 nM), BaF3-BCR-ABL (IC50, 12 nM), and BaF3-Jak2-V617F (IC50, 25 nM) cells, whereas the IC50 for the parental BaF3 cells was 660 nM, demonstrating a significant therapeutic window. Expectedly, both gilteritinib and quizartinib inhibited the proliferation of BaF3-FLT3-ITD cells but lacked activity against the Ba/F3 cells expressing BCR-ABL and JAK2-V617F (supplemental Figure 1C-D). To determine whether the inhibitory effect of pluripotin on cell proliferation is mediated through on-target inhibition of oncogenic kinase signaling, FLT3, ABL, and Jak2 autophosphorylation, as well as the phosphorylation of STAT5 and ERK1/2, were evaluated by immunoblotting. Treatment with pluripotin abolished the phosphorylation of the autophosphorylated tyrosine residues in the activation loops of the FLT3-ITD (Y842), BCR-ABL (Y386), and Jak2-V617F (Y1002) kinases (Figure 1B). Likewise, treatment with pluripotin inhibited the phosphorylation of STAT5 and ERK1/2 (Figure 1B). Consequently, on-target inhibition of oncogenic kinase signaling resulted in apoptotic cell death, as determined by annexin V staining (Figure 1C). Altogether, these data demonstrate that pluripotin selectively kills Ba/F3 cells expressing FLT3-ITD, BCR-ABL, and JAK2-V617F by blocking their kinase activity and downstream signaling.

Pluripotin abolishes the adaptive resistance conferred by RAS-MAPK, BCR-ABL, and JAK2 signaling

Resistance to type II inhibitors is predominately mediated by on-target resistant mutations in the activation loop and at the gatekeeper residue, whereas adaptive resistance is more frequently observed with type I inhibitors. Activation of compensatory survival pathways, such as RAS-MAPK, BCR-ABL, and JAK2, drives adaptive resistance. To determine the effect of these genes in conferring resistance, Ba/F3 cells expressing Ras mutants, BCR-ABL, and JAK2-V617F alone or with FLT3-ITD were created.
Parental BaF3 and BaF3-FLT3 ITD cells were transduced with retroviruses expressing NRAS G12V -Ires-GFP, NRAS Q61K -Ires-GFP, BCR-ABL-Ires-GFP, and JAK2 V617F -Ires-GFP. These cell lines were subjected to dose-dependent cell proliferation assays to determine IC50 values. As expected, BaF3-FLT3 ITD cells coexpressing RAS G12V , BCR-ABL, and JAK2 V617F conferred strong resistance to gilteritinib (Figure 2A; supplemental Figure 2A). Likewise, treatment with trametinib alone failed to suppress the resistance conferred by the RAS mutants or by BCR-ABL and JAK2 V617F in FLT3 ITD -expressing cells (supplemental Figure 2B). As reported earlier, 8 we noted that a combination of trametinib with gilteritinib suppressed the adaptive resistance conferred by RAS mutants but lacked a therapeutic window (data not shown). To test the hypothesis that a MAPK inhibitor targeting FLT3, BCR-ABL, and JAK2 would abrogate the adaptive resistance to gilteritinib, we next treated BaF3-FLT3 ITD cells coexpressing RAS variants, BCR-ABL, and JAK2 with increasing pluripotin concentrations. As envisioned, treatment with pluripotin potently suppressed the resistance conferred by the FLT3 ITD cooperative mutations RAS G12V , RAS Q61K , BCR-ABL, and JAK2 V617F (Figure 2B; supplemental Figure 2C). A comparative analysis of pFLT3, pERK1/2, and pSTAT5 in BaF3-FLT3 ITD and BaF3-FLT3 ITD /RAS G12V cells under gilteritinib and pluripotin inhibition revealed that pluripotin is equally active in suppressing pFLT3 and pSTAT5 in both cell lines, whereas the efficacy of gilteritinib was reduced in BaF3-FLT3 ITD cells expressing RAS G12V (supplemental Figure 2C). Likewise, human AML cells (MOLM13 and MV4-11) expressing RAS G12V and RAS Q61K conferred resistance to gilteritinib, whereas treatment with pluripotin fully suppressed cell proliferation with a concomitant reduction in phosphorylation of FLT3 and its substrates, STAT5 and ERK1/2 (Figure 2D-F). Next, we evaluated the efficacy of pluripotin on non-FLT3-mutated cell lines harboring mutations in MAPK pathways and transcription factors (both point mutations and gene fusions; supplemental Table 1). Pluripotin inhibited the proliferation of most non-FLT3-mutated AML cell lines except ME1, THP1, and MO7-E. Pluripotin sensitivity correlates with the concomitant inhibition of pSTAT5 and pERK1/2, whereas resistant cell lines (ME1, THP1, and MO7-E) show persistent pSTAT5 and pERK levels (supplemental Figure 2F). To test whether the BCL2 inhibitor venetoclax can augment these responses, we evaluated venetoclax in combination with pluripotin or gilteritinib (supplemental Figure 2G). As noted earlier, MOLM13 and non-FLT3-mutated AML cells do not show a synergistic response to the combination of venetoclax plus gilteritinib. 25 Strikingly, venetoclax in combination with pluripotin exhibited a synergistic response in both FLT3-mutated and non-FLT3-mutated cells, except for the MOLM13 and MO7-E cells (supplemental Figure 2G). Together, these data provide evidence that the unique polypharmacology of pluripotin effectively suppresses the adaptive resistance observed with gilteritinib treatment, and perhaps combinatorial targeting of BCL2 will impart greater efficacy in combating the resistance.

Molecular docking studies reveal that pluripotin binds to inactive conformations of FLT3, ABL, and JAK2

Next, we performed molecular docking studies of pluripotin to understand the structural basis of FLT3, ABL, and JAK2 inhibition. We used both the inactive and active conformations of FLT3, ABL, and JAK2 for docking studies.
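The synergy observed for venetoclax plus pluripotin can be quantified against a reference model; one widely used choice is Bliss independence, sketched below. The study does not state which synergy metric was applied, so both the metric and the inhibition fractions here are illustrative assumptions.

```r
# Illustrative Bliss-independence check for a two-drug combination.
# All inhibition fractions below are made up for the example.
f_ven  <- 0.30  # fraction of cells inhibited by venetoclax alone
f_plu  <- 0.50  # fraction inhibited by pluripotin alone
f_comb <- 0.80  # fraction inhibited by the combination

f_expected <- f_ven + f_plu - f_ven * f_plu  # Bliss expectation = 0.65
excess     <- f_comb - f_expected            # > 0 suggests synergy
excess
```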
Our in silico analysis failed to predict binding of pluripotin at the ATP site when the active conformations were used for docking. Our docking analysis predicts that pluripotin binds to a closed and enzymatically inactive conformation of FLT3, ABL, and JAK2 in which the phenylalanine residue of the DFG motif is displaced to accommodate the trifluoromethyl phenyl ring (Figure 3A-F). Pluripotin anchors to the ATP site by coordinating with residues from the kinase hinge (FLT3-Cys 694, ABL-Met 318, and Jak2-Leu 932), helix-C (FLT3-Glu 661, ABL-Glu 286, and Jak2-Glu 898), and the catalytic HRD motif (FLT3-Asp 829, ABL-Asp 381, and Jak2-Asp 994) using hydrogen bonds (Figure 3G-I). The active kinase conformation targeted by type I inhibitors is incompatible with pluripotin binding owing to a steric clash with the phenylalanine residue of the DFG motif (Figure 3J). Likewise, the phenylalanine residues of the DFG motifs in the active conformations of ABL and JAK2 kinases rendered steric hindrance to pluripotin binding (not shown). Interestingly, similar to gilteritinib and quizartinib, pluripotin is not in close contact with the gatekeeper residue F691. These analyses suggest that mutations activating the kinase enzymatic activity (activation loop mutants would stabilize the active kinase conformation) will confer resistance to pluripotin, whereas the gatekeeper mutations will remain susceptible to inhibition. This analysis also revealed that, unlike FLT3, the gatekeeper mutants of ABL will confer resistance to pluripotin because pluripotin lies in close proximity to the ABL gatekeeper residue, threonine 315.

Pluripotin effectively suppresses the FLT3 gatekeeper mutant

To validate our models and the on-target inhibition of FLT3, ABL, and JAK2 by pluripotin, cell proliferation assays with pluripotin were performed using Ba/F3 cells expressing mutant kinases. If our model is correct, we expected the FLT3 activation-loop mutants to confer resistance, as reported previously for the type II FLT3 inhibitors quizartinib and sorafenib. The gatekeeper F691L mutant conferred cross-resistance to both type II (quizartinib and sorafenib) and type I (gilteritinib and crenolanib) inhibitors, likely causing a steric blockade to inhibitor binding. In contrast, our model suggests that pluripotin binding is unaffected by the substituted leucine at the gatekeeper residue (Figure 3K). Consistent with the modeling predictions, activation loop mutants conferred resistance to pluripotin, whereas the gatekeeper mutant, F691L, was susceptible to inhibition (Figure 3L; supplemental Figure 3). The active site mutant Jak2 L983F conferred cross-resistance to most Jak2 inhibitors. 18 Our modeling studies predict that a phenylalanine substitution for Leu 983 will cause direct steric hindrance to pluripotin binding (Figure 3M). As envisioned, Ba/F3 cells expressing Jak2 V617F/L983F conferred resistance to pluripotin (Figure 3N). Likewise, modeling studies with ABL predicted that the gatekeeper mutant T315I would block pluripotin binding (Figure 3O). Accordingly, expression of BCR-ABL T315I conferred resistance to pluripotin (Figure 3P). Altogether, these data clearly demonstrate that pluripotin is a type II inhibitor of FLT3, ABL, and Jak2. Although the activation loop mutants of FLT3 conferred resistance to pluripotin, dose escalation should suppress them, as there is a significant therapeutic window available compared with parental Ba/F3 cells (Figure 3).
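The therapeutic-window argument can be made concrete with the IC50 values reported earlier (7-25 nM for the oncogene-expressing lines versus 660 nM for parental Ba/F3 cells); a two-line R calculation of the fold-selectivity follows.

```r
# Fold-selectivity windows from the IC50 values stated in the text
ic50 <- c(FLT3_WT = 7, FLT3_ITD = 8, BCR_ABL = 12, JAK2_V617F = 25)  # nM
parental <- 660                                                      # nM
round(parental / ic50, 1)  # roughly 26- to 94-fold selectivity
```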
Pluripotin effectively suppresses murine AML

Because the hematopoietic cytokines IL-3, GM-CSF, and FLT3LG have been shown to abrogate the TKI response in AML by activating JAK2 and RAS-MAPK signaling, 14,15 we used triple transgenic NSGS mice (expressing human IL-3, GM-CSF, and SCF) for in vivo modeling that captures the effect of hematopoietic cytokines in driving the TKI response. 11 Prior drug validation studies using NSG mice, nonetheless, showed in vivo efficacy of gilteritinib and quizartinib; however, their antileukemic activity in NSGS mice is lost owing to cytokine-driven resistance. We reasoned that NSGS mice more closely represent the clinical setting for therapeutic validation studies, as they express resistance-conferring cytokines. One million human AML cells (MOLM13, MOLM13-RAS G12V , MV4-11, or MV4-11-RAS G12V ) expressing luciferase and cherry were transplanted into NSGS mice by tail vein injection. Drug treatment (gilteritinib 30 mg/kg and pluripotin 15 mg/kg) was started 3 days after transplantation. As expected, treatment with gilteritinib did not suppress disease progression compared with vehicle-treated mice (Figure 4; supplemental Figure 4). In contrast, treatment with pluripotin extended survival by 8 to 12 days compared with gilteritinib-treated mice (Figure 4; supplemental Figure 4). Likewise, the leukemic burden in PB and BM, determined by cherry+ cells from deceased or euthanized mice, showed a significant reduction only in pluripotin-treated mice. In the murine AML model, gilteritinib-treated mice developed progressive disease from week 11 and eventually succumbed to leukemia by week 16 (Figure 5B-D). In contrast, mice treated with pluripotin (15 mg/kg daily) showed stable disease, and ~80% of mice survived (Figure 5B,C). Although treatment with pluripotin efficiently suppressed leukemic progression (normal WBC levels and reduced leukemic burden), it was not curative, as treatment discontinuation resulted in leukemic progression after week 16. Similar to gilteritinib, treatment with pluripotin had no significant impact on other blood parameters such as red blood cells, platelet numbers, and hematocrit, suggesting no adverse effect on hematopoiesis (Figure 5E-G). Altogether, these data clearly show that pluripotin efficiently suppresses the adaptive resistance to FLT3 TKIs mediated by compensatory survival signaling.

Pluripotin effectively suppresses human AML

To determine the clinical potential of pluripotin in AML, we performed a comparative cell proliferation assay with increasing concentrations of pluripotin and gilteritinib. Given that the hematopoietic cytokines FL, GM-CSF, and IL-3 abrogate the TKI response, we first performed cell proliferation assays using a mixture of 3 cytokines that do not confer resistance (SCF, TPO, and IL-6). We also performed cell proliferation assays with the cytokine composition (FL, GM-CSF, IL-3, IL-6, TPO, and SCF) used to propagate both normal hematopoietic stem cells (HSCs) and leukemic cells. Three FLT3 ITD+ and 3 non-FLT3-mutated primary AML samples were evaluated (supplemental Figure 5A). As expected, gilteritinib efficiently inhibited the proliferation of FLT3 ITD AML cells (IC50, 0.6 nM) when assayed in the absence of resistance-conferring cytokines (supplemental Figure 5B). However, gilteritinib was ineffective on leukemic cells harboring WT FLT3 with additional AML-associated mutations.
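Survival benefits such as the 8- to 12-day extension reported earlier in this section are typically summarized with Kaplan-Meier estimates and per-arm median survival. A self-contained R sketch follows; the group labels and event times are hypothetical, not study data.

```r
# Kaplan-Meier curves and median survival for two hypothetical arms
library(survival)
surv_df <- data.frame(
  time   = c(28, 30, 33, 36, 38, 41, 44, 48),        # days, invented
  status = rep(1, 8),                                 # all events observed
  group  = rep(c("gilteritinib", "pluripotin"), each = 4)
)
km <- survfit(Surv(time, status) ~ group, data = surv_df)
print(km)  # reports median survival per group
plot(km, col = c("grey40", "red3"),
     xlab = "Days after transplantation", ylab = "Fraction surviving")
```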
Likewise, the addition of hematopoietic cytokines (FLT3LG, GM-CSF, and IL-3) abrogated the gilteritinib response (supplemental Figure 5C). In contrast, pluripotin inhibited the growth of FLT3 ITD AML cells and effectively suppressed the growth factor-mediated resistance, suggesting that it will be more effective in vivo (supplemental Figure 5D,E). Interestingly, AML4 (FLT3 ITD+ harboring mutations in RAS and PTPN11), which showed primary resistance to gilteritinib, was effectively suppressed by pluripotin in both cytokine mixtures (supplemental Figure 5D,E). Next, we evaluated the efficacy of pluripotin in NSGS mouse xenografts transplanted with AML1 (FLT3 ITD ) and AML4 (FLT3 ITD + NRAS G13D + PTPN11 D61Y ) (Figure 6A). Treatment with gilteritinib showed a modest reduction in leukemic burden in mice transplanted with AML1, whereas mice transplanted with AML4 did not respond to treatment. Consequently, treatment with gilteritinib did not show any significant change in survival compared with vehicle-treated mice (Figure 6B-E). Treatment with pluripotin showed a substantial reduction in leukemic burden and extended survival by 40 and 20 days in mice transplanted with AML1 and AML4, respectively (Figure 6B-E). Altogether, these data provide evidence that pluripotin is an effective FLT3 inhibitor and suppresses the adaptive resistance conferred by RAS-MAPK mutations or by activation of ABL and JAK2 signaling.

Pluripotin selectively induced apoptosis in leukemic progenitors

A recent study reported that pluripotin supports the ex vivo expansion of normal HSC/progenitor cells, 27 which prompted us to determine its selectivity for normal stem cells versus leukemic stem cells (LSCs). To test this, Lin− cells from normal and leukemic mice were isolated and grown for 6 days with pluripotin (0.1 μM or 1 μM) or gilteritinib (1 μM) in SFEM media. On day 6, cells were stained with annexin V and stem cell-surface markers (Lin−, Sca1+, and Kit+, collectively called LSK cells) to determine the differentiation and extent of apoptosis in stem/progenitor cells. As reported by Turan et al, 27 treatment with 1 μM of pluripotin blocked the differentiation of Lin− cells from both leukemic and normal samples but showed enrichment of LSK cells only from the normal progenitors. In comparison, cells treated with 0.1 μM of pluripotin were ineffective in blocking differentiation (Figure 7A; supplemental Figure 6A). In contrast, treatment with gilteritinib induced the differentiation of both normal and leukemic progenitors; however, differentiation of leukemic progenitors was significantly higher (~2-fold). Likewise, treatment with gilteritinib showed higher annexin V staining of leukemic cells than of normal progenitors (Figure 7A-D). Importantly, this revealed that gilteritinib is minimally active against Lin− leukemic progenitors, although it potently induces apoptosis in differentiated Lin+ cells (Figure 7A-D). In contrast, treatment with pluripotin induced apoptosis in both Lin− and Lin+ cells but with a 6-fold greater selectivity for leukemic progenitors (Figure 7A-D; supplemental Figure 6A-B), resulting in enrichment of stem/progenitor (LSK) cells only from the normal progenitors (supplemental Figure 6A). We observed that a small population of LSK cells from the pluripotin-treated leukemic progenitors is resistant to treatment, phenocopying the in vivo finding in which a small population of cells remains suppressed (cytostatic) during treatment and, on treatment discontinuation, drives disease relapse.
Likewise, CD34+ cells, the human equivalent of mouse Lin− cells, were isolated and analyzed from individuals with and without AML. Treatment with pluripotin showed enhanced apoptotic cell death of leukemic progenitors compared with gilteritinib, while enriching the normal HSC progenitors as determined by CD34+/CD38− cell-surface markers (Figure 7E; supplemental Figure 6C). Furthermore, in the colony-forming unit assay (a functional progenitor assay), treatment with pluripotin showed significant protection of normal colony-forming units while efficiently suppressing the emergence of leukemic colonies compared with gilteritinib (Figure 7F). Altogether, these data provide evidence that pluripotin selectively targets leukemic progenitors while sparing normal HSC growth.

Discussion

AML is a complex and heterogeneous disease, thus requiring complex therapeutic approaches for effective treatment outcomes. Although targeting multiple lesions using a combinatorial approach is successful in some cases, its utility is limited owing to drug-drug interactions and differing pharmacokinetics. Polypharmacological inhibition of multiple targets using a single agent may overcome the clinical challenges associated with AML treatment. Here, we provide evidence that polypharmacological inhibition by pluripotin of the survival pathways associated with acquired and adaptive resistance in AML provides effective treatment outcomes. Kinase inhibitors are classified as type I and type II inhibitors based on their binding to the ATP site. 28,29 Inhibitors binding to an inactive conformation are classified as type II inhibitors, whereas those preferentially targeting the active conformation, while also binding inactive conformations, are called type I inhibitors. 28,30 Both type I (gilteritinib) and type II (quizartinib) inhibitors have been evaluated in AML, and they induce superior therapeutic responses compared with conventional chemotherapy; however, most patients relapse within months despite continued therapy. The mechanisms driving resistance are partly different for type I and type II inhibitors. For instance, patients treated with quizartinib or sorafenib (type II inhibitors) tend to select on-target resistance mutations in FLT3 kinase, 31,32 whereas resistance to type I inhibitors (gilteritinib or crenolanib) is mostly driven by activation of alternate survival signaling (RAS/MAPK, JAK2, and BCR-ABL), because type I inhibitors are equally active against both FLT3 ITD and FLT3 TKD mutants and are less prone to resistance from on-target mutations. Mutations activating the kinase enzymatic activity or destabilizing the inactive conformation are common in patients treated with type II inhibitors. 10,17,19 The emergence of on-target resistance mutations is more frequent with type II inhibitors because every kinase can adopt multiple inactive conformations; 17,19 numerous resistance-conferring mutations may therefore emerge to destabilize each inactive conformation, which by default stabilizes the active conformation, resulting in enzymatic activation. Consequently, resistant variants selected against type II inhibitors are sensitive to type I inhibitors, with few exceptions (gatekeeper variants). Our docking predictions revealed that pluripotin binds to the inactive conformation of FLT3 and that its interaction with the gatekeeper residue, F691, is minimal. Consequently, it potently inhibits the gatekeeper mutant F691L but has reduced activity against the resistant mutants from the activation loop.
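The colony-forming unit readout in Figure 7F follows the normalization described in the methods (colonies as percent of vehicle, set to 100%, over triplicates). A small R sketch of that normalization with invented counts:

```r
# Hypothetical CFU normalization to the vehicle control (set to 100%),
# with mean and SD over triplicates; all counts are invented.
counts <- data.frame(
  treatment = rep(c("vehicle", "gilteritinib", "pluripotin"), each = 3),
  colonies  = c(120, 115, 125, 90, 95, 88, 35, 40, 32)
)
vehicle_mean <- mean(counts$colonies[counts$treatment == "vehicle"])
counts$percent <- 100 * counts$colonies / vehicle_mean

# Mean +/- SD of percent-of-vehicle per treatment arm
aggregate(percent ~ treatment, counts,
          function(x) c(mean = mean(x), sd = sd(x)))
```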
Nonetheless, given the existing therapeutic index compared with normal Ba/F3 cells, dose increments should be able to suppress the activation loop variants (Figure 3). To test this, we performed in vitro screening using randomly mutagenized FLT3 ITD to select for resistant variants, as reported earlier. 17 We failed to recover resistant clones at 50 nM of pluripotin, 5-fold higher than the IC50 value (data not shown), whereas quizartinib resistance screening recovered previously reported mutations from the activation loop and the gatekeeper residue. 10 These data support the notion that pluripotin alone will be able to suppress the emergence of on-target resistance mutations at higher concentrations while retaining a significant therapeutic index. The BCL2 inhibitor venetoclax, in combination with hypomethylating agents, was approved for AML treatment as a frontline therapy in older adults. 33 However, the efficacy of venetoclax in patients with AML harboring FLT3 mutations is thwarted by the expression of MCL1 owing to constitutive FLT3 kinase activity. 25,34 Thus, inhibition of FLT3 ITD with TKIs restored venetoclax sensitivity, as elegantly demonstrated by Zhu et al. 25 Importantly, they also noted that the combination of gilteritinib plus venetoclax exerts a synergistic response only in FLT3-mutated AML cells, whereas non-FLT3-mutated cells were unaffected by this combination. Interestingly, the efficacy of pluripotin in inducing a synergistic response with venetoclax in both FLT3-mutated and non-FLT3-mutated AML, possibly by simultaneously inhibiting the RAS-MAPK and JAK-STAT pathways, highlights the promise of pluripotin in combination with BCL2/MCL1 inhibitors in treating hematologic cancers. FLT3 TKIs show differential responses across leukemic cell compartments. For instance, FLT3 inhibitors induce apoptosis in circulating blasts but promote differentiation of leukemic blasts in the BM. 3 This suggests that the BM niche is a potential source of intrinsic resistance, resulting in leukemic persistence and minimal residual disease (MRD). These surviving leukemic cells serve as a reservoir to develop resistance. 3,11 Hematopoietic cytokines and growth factors (FL and FGF2) 12,13 released from the BM stroma/niche abrogate oncogene dependence, resulting in TKI resistance. 11 Notably, FL, whose expression is reported to increase during chemotherapy, abrogates the TKI response by activating canonical FLT3 WT signaling. 12,35 However, it is not clear how activation of FLT3 WT signaling confers resistance; an altered signaling network engaged by FLT3 WT that supports survival during TKI therapy by activating cell-intrinsic inflammatory pathways has been implicated. 36,37 Likewise, inflammatory cytokine-induced activation of JAK2 signaling with high levels of pJAK2 is associated with adverse clinical outcomes in AML and therapeutic resistance. Therefore, concomitant targeting of FLT3 and cytokine signaling using the JAK2 inhibitor ruxolitinib showed an improved response in preclinical mouse models of AML. 14,15
Recent studies identified activation of RAS/MAPK pathways, most commonly owing to NRAS or KRAS mutations and less frequently to FLT3 gatekeeper mutations or BCR-ABL1 fusions, as drivers of AML progression. 8 We show that polypharmacological targeting of FLT3, ABL, MAPK, and JAK2 by pluripotin effectively suppressed this adaptive resistance, but it was not curative: treatment withdrawal resulted in disease relapse, suggesting the persistence of LSCs during treatment. TKI therapy alone is not sufficient to eliminate LSCs, as diverse survival signaling and a high apoptotic threshold, similar to that of normal HSCs, support their persistence. 11 As reported earlier, treatment with gilteritinib induced rapid differentiation of both normal and leukemic Lin− cells, 38 although differentiation of leukemic progenitors was significantly higher (~2-fold) than that of normal Lin− cells. In contrast, treatment with pluripotin prevented the differentiation of normal HSC progenitors more strongly than that of the leukemic cells. Interestingly, unlike gilteritinib, treatment with pluripotin induced apoptosis in Lin− leukemic cells, which provides an explanation for its superior in vivo response. As reported recently by Turan et al, 27 we observed enrichment of normal HSC progenitors on treatment with pluripotin, determined by LSK cell-surface markers in mice and by CD34+/CD38− markers for human cells (supplemental Figure 6). In contrast, pluripotin induced apoptosis in leukemic progenitors in both LSK and CD34+ cells; however, a small fraction of LSK and CD34+/CD38− cells persisted. These persistent cells are intrinsically resistant to TKI treatment, seemingly constitute MRD in vivo, and drive disease relapse after treatment discontinuation. This is in stark contrast to treatment with gilteritinib, which lacks activity against BM-residing leukemic cells in NSGS mice (Figures 4 and 5; supplemental Figure 4). We and others have reported that hematopoietic cytokines abrogate the TKI response; therefore, BM-resident leukemic cells do not undergo apoptosis when targeted with FLT3-selective inhibitors. 11,14,15 Failure to entirely suppress MRD and BM-resident leukemic cells is the reason that remissions with currently used FLT3 TKIs are short, as disease relapses even on continued treatment. The efficacy of pluripotin in eradicating the bulk of leukemia and suppressing the LSCs in BM in our preclinical models provides evidence that it may exert a durable response, akin to TKI treatment in CML, which, so far, has not been possible with currently used FLT3 TKIs. However, as in CML, LSCs will persist, and their eradication is necessary for a curative response. In conclusion, this study demonstrates that pluripotin has several advantages over currently used FLT3 TKIs. The poor efficacy of FLT3 inhibitors in AML is associated with the emergence of acquired and adaptive resistance. We demonstrate that pluripotin is a type II inhibitor of FLT3, ABL, JAK2, and RasGAP. It effectively suppressed the resistance conferred by the gatekeeper mutant, F691L. Likewise, it potently suppressed the resistance mediated by activation of RAS-MAPK, BCR-ABL, and JAK2 signaling in multiple preclinical mouse models of AML. AML is most common in older patients, treatment outcomes decline continuously with increasing age, and many patients cannot tolerate intensive chemotherapy. We anticipate that polypharmacological targeting of the relevant survival mechanisms in AML using a single agent will be more effective in treating older patients.
Altogether, as a proof of concept, we provide evidence that polypharmacological targeting of multiple survival pathways by pluripotin alone induced a durable response by simultaneously suppressing both acquired and adaptive resistance. Moreover, as the first example of a polypharmacological type II kinase inhibitor, pluripotin may significantly influence future drug design and the development of an effective treatment strategy for AML.
Low Expression of Single-stranded DNA Binding Protein 2 (SSBP2) Predicts Unfavourable Postoperative Outcomes in Patients With Clear Cell Renal Cell Carcinoma

Background: Single-stranded DNA binding protein 2 (SSBP2) is a subunit of a single-stranded DNA binding complex, which is involved in the maintenance of hematopoietic stem cells and stress responses. Numerous studies have suggested that SSBP2 functions as a tumor suppressor and is silenced through a pathway mediated by promoter hypermethylation. However, the role of SSBP2 in human renal cell carcinoma has not been reported to date. Herein, we investigated the clinicopathological significance of SSBP2 expression in clear cell renal cell carcinoma (ccRCC). Materials and Methods: We constructed tissue microarrays consisting of 173 ccRCC tissues, and SSBP2 expression was evaluated semi-quantitatively based on the staining intensity and the proportion of stained cells. For statistical analysis, the cases were divided into two groups according to SSBP2 expression, and the correlation of SSBP2 expression with various clinicopathological characteristics and patient outcomes was evaluated. Results: Low SSBP2 expression was observed in 114 of 173 (65.9%) ccRCC cases, and low SSBP2 expression was significantly correlated with larger tumor size (p=0.005, chi-square test), higher WHO/ISUP histological grade (p<0.001, chi-square test), tumor necrosis (p=0.008, chi-square test), sarcomatoid change (p=0.021, chi-square test), and higher pT AJCC stage (p=0.002, chi-square test). Kaplan-Meier survival curves revealed that patients with low SSBP2 expression had worse recurrence-free survival (p=0.041, log-rank test). Conclusion: ccRCC with low SSBP2 expression was associated with adverse clinicopathological characteristics and poor patient outcomes.

Renal cell carcinoma (RCC) is the most common malignancy in the kidney and is the ninth most common cancer in both men and women in Korea (1). Clear cell renal cell carcinoma (ccRCC) is a histological subtype of RCC that accounts for about 80% of all RCC cases (2). Although nephrectomy can cure most localized ccRCCs, distant or local recurrence occurs in 20-30% of patients within 5 years after curative surgery (3,4). Recent proteomic analyses revealed numerous dysregulated proteins and cancer-related signalling pathways in RCC, which are potential diagnostic and prognostic biomarkers and molecular targets for treatment (5,6). However, owing to the molecular phenotype heterogeneity of RCC, there is no well-established molecular biomarker for prognosis (7,8). Therefore, prognostic biomarkers for ccRCC are greatly needed. Single-stranded DNA binding protein 2 (SSBP2) was isolated as a tumor suppressor of myeloid leukemia and is located in a critical region of loss at chromosome 5q14.1 (9). SSBP2 is a subunit of an ssDNA-binding complex that is involved in the maintenance of hematopoietic stem cells and stress responses (10). SSBPs interact with the transcriptional adaptor protein Lim domain-binding protein 1 (LDB1) through a highly conserved amino-terminal motif (11); LDB1 binds to the LIM domains of LIM-only proteins (LMO) and LIM homeodomain proteins (LHX) through a carboxy-terminal LIM-interacting domain (12). Although precise levels of LMO, LHX, and LIM-binding proteins are known to be critical for many developmental programs, accumulating evidence suggests that these complexes are also key molecules in various human cancers (13).
The role of SSBP2 in human malignancies is an area of active investigation, and many studies have suggested that SSBP2 functions as a tumour suppressor and is silenced through a pathway mediated by promoter hypermethylation (14)(15)(16)(17)(18). However, among glioblastoma patients, an SSBP2 variant was associated with poor survival (19), and the role of SSBP2 in human RCC has not yet been reported (20). To determine the clinical role of SSBP2 in human ccRCC, we investigated the expression of SSBP2 in ccRCC tissues by immunohistochemistry. The association of SSBP2 expression with various clinicopathological characteristics was assessed, as well as whether SSBP2 is a prognostic factor for patient survival.

Materials and Methods

Patients and tumour samples. We enrolled a consecutive series of 252 patients with RCC in this study. All cases were diagnosed and underwent surgery at Hanyang University Hospital (Seoul, Korea) between 2006 and 2015. Patients who were diagnosed with non-ccRCC or had incomplete clinical follow-up data or unavailable paraffin blocks were excluded, leaving 173 patients. The median follow-up period was 80 months (range, 6-141 months). Patients were divided into histologic grades 1 to 4 according to the World Health Organization/International Society of Urological Pathology (WHO/ISUP) grading system. Pathologic stage was determined according to the 8th edition of the American Joint Committee on Cancer (AJCC) staging system. We reviewed all hematoxylin and eosin (H&E)-stained slides, pathology reports, and other medical records to confirm the diagnoses. The clinicopathologic parameters assessed were tumour size, histologic grade (WHO/ISUP grading system), lymphovascular invasion (renal vein tumour thrombus was included), sinus fat invasion, perirenal soft tissue involvement, tumour necrosis, sarcomatoid change, and pT AJCC stage. This study was approved by the Institutional Review Board of Hanyang University Hospital (HYUH 2018-05-005), and the requirement for informed consent was waived.

Tissue microarray construction. We used a manual tissue microarrayer (Unitma, Seoul, Korea) to construct the tissue microarray (TMA) from archival formalin-fixed, paraffin-embedded tissue blocks. Non-necrotic tissues that were most representative of the centre of the carcinoma, spanning 0.5 cm or larger, were selected from the H&E-stained sections under light microscopy. Tissue cylinders (2 mm in diameter) were punched from a previously marked lesion of each donor block and transferred to the recipient block (Unitma). Each TMA comprised 5×10 samples.

Immunohistochemical staining. The immunohistochemical staining for SSBP2 was performed with 4-µm-thick sections from the TMA blocks. The sections were deparaffinized in xylene and then rehydrated through graded ethanol. For antigen retrieval, the sections were heated in sodium citrate buffer (pH 6.0) in an autoclave at 100˚C for 20 min. Endogenous peroxidase activity was blocked with peroxidase blocking solution (S2023; Dako, Glostrup, Denmark). The TMA slides were incubated with primary antibodies at 4˚C overnight and then incubated with a labelled polymer (EnVision/HRP, K5007; Dako) for 30 min at room temperature. The primary antibody was a rabbit monoclonal anti-SSBP2 antibody (EPR11520, 1:100 dilution; ab177944, Abcam, Cambridge, MA, USA), whose immunogen was reported to be a synthetic peptide approximately 300 aa from the C-terminus (cysteine residue).
Finally, 3,3'-diaminobenzidine tetrahydrochloride was used as a chromogen for detection, and the tissues were counterstained with Mayer's hematoxylin.

Interpretation of immunohistochemical staining. Nuclear staining of the tumour cells was assessed using the H-score method (the sum, over intensity scores, of staining intensity × percentage of positive cells at that intensity, yielding a score of 0-300). Staining intensity was graded as follows: none=0, weak=1, moderate=2, and strong=3. Representative micrographs are shown in Figure 1. ROC curve analysis was performed to determine the cut-off score for low SSBP2 expression with respect to survival endpoints (21). Expression below the diagnostic cut-off, i.e., an H-score <100, was defined as low SSBP2 expression.

Statistical analysis. The statistical analysis was performed using SPSS software, version 21 (IBM, Armonk, NY, USA). The chi-square test was used to evaluate the correlations between SSBP2 expression and the clinicopathologic parameters of tumour size, WHO/ISUP grade, lymphovascular invasion, sinus fat invasion, perirenal soft tissue involvement, tumour necrosis, sarcomatoid change, and pT AJCC stage. Recurrence-free survival and cancer-specific survival were determined using Kaplan-Meier survival curves, and the log-rank test was used to compare the differences. A p-value less than 0.05 was considered statistically significant.

Results

Correlations between SSBP2 expression and patient outcomes. Kaplan-Meier survival curves showed that patients with low SSBP2 expression had worse recurrence-free survival (p=0.041, log-rank test) (Figure 2A). There was also a tendency toward worse cancer-specific survival for patients with low SSBP2 expression, but it was not statistically significant (p=0.061, log-rank test) (Figure 2B). In multivariate survival analyses, loss of SSBP2 expression was not an independent prognostic factor for recurrence-free or cancer-specific survival (data not shown).

Discussion

SSBP2 is a well-known tumour suppressor in acute myelogenous leukaemia. However, the role of SSBP2 as a tumour promoter or suppressor in various human malignancies is controversial (14). Most studies in many human cancers, such as prostate, oesophageal, ovarian, and gallbladder cancers, have reported SSBP2 as a tumour suppressor that is silenced by promoter hypermethylation. Liu et al. detected SSBP2 hypermethylation in 61.4% (54/88) of prostate cancers and 0% (0/23) of benign prostatic hyperplasias. Hypermethylation of SSBP2 was associated with higher stage, and in a colony formation assay, SSBP2 expression inhibited tumour cell proliferation and induced cell cycle arrest (15). Huang et al. (16) reported that promoter methylation and down-regulation of SSBP2 were frequently detected in squamous cell carcinomas of the oesophagus and suggested that SSBP2 functions as a tumour suppressor that acts by inhibiting the Wnt signalling pathway. Brait et al. (17) detected promoter methylation at 13 genes, including SSBP2, in ovarian cancer. Although hypermethylation of SSBP2 was observed in 9% (3 of 33 cases) of ovarian cancers, it was not statistically significant. Tsukamoto et al. (18) found that methylation of the SSBP2 promoter was more frequent in gallbladder cancer than in cholecystitis. In addition, an oncogenic role of SSBP2 as a tumour promoter has also been suggested in glioblastoma. Using genotyping, Xiao et al. (19) identified a single-nucleotide polymorphism (rs7732320), located in the intronic region of SSBP2, with prognostic significance.
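The H-score and ROC-based cutoff described above can be reproduced in a few lines of R; the pROC package and all per-case values below are illustrative assumptions rather than the authors' SPSS workflow.

```r
# H-score = sum over intensities (0-3) of intensity x percent positive cells
# install.packages("pROC")
library(pROC)

pct <- c(int0 = 20, int1 = 30, int2 = 40, int3 = 10)  # one hypothetical case
h_score <- sum(c(0, 1, 2, 3) * pct)                   # = 140 (range 0-300)

# ROC-derived cutoff for "low expression" against a survival endpoint
scores <- c(30, 60, 90, 120, 150, 200, 250, 280)  # invented H-scores
event  <- c(1, 1, 1, 0, 1, 0, 0, 0)               # 1 = recurrence observed
roc_obj <- roc(event, scores, quiet = TRUE)
coords(roc_obj, "best", best.method = "youden")   # optimal threshold
```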
They investigated whether patient outcome was correlated with the transcript levels of SSBP2 in 619 glioblastoma patients (from 3 publicly available gene expression data sets) (22)(23)(24). There was a strong and significant association between SSBP2 gene expression and poor overall survival in glioblastoma patients (19). In this study, we observed low SSBP2 expression in 65.9% of ccRCC tissues and showed that SSBP2 loss was significantly associated with aggressive phenotypes, including larger tumour size, higher WHO/ISUP histologic grade, tumour necrosis, sarcomatoid change, higher pT AJCC stage, and worse recurrence-free survival. To date, there have been no studies on SSBP2 expression in RCC. Dormoy et al. reported that the developmental marker Lim1 functions as an oncogene in ccRCC cells and suggested targeting Lim1 as an innovative therapeutic intervention for human ccRCC (20). SSBP2 and Lim1 are two of the various factors involved in regulating the transcriptional activity of LIM-homeodomain proteins, and their interactions are important in development (11,25). Further molecular investigations are needed to provide a plausible mechanism for their function in oncogenesis. As for the strengths of this study, it is the only study identifying the significance of SSBP2 according to protein expression levels in ccRCC. Furthermore, we correlated various clinical and pathological parameters that are practically related to prognosis with SSBP2 expression. However, there are several limitations. Firstly, this study was retrospective and was performed in a single medical centre with a limited number of patients. Secondly, other clinical factors, including Eastern Cooperative Oncology Group (ECOG) performance status, which may affect prognosis, were not considered, and SSBP2 was not statistically significant in multivariate analyses with well-known pathological prognostic factors (26). Thirdly, the detailed molecular mechanism underlying the role of SSBP2 expression in ccRCC was not studied. According to TCGA data (http://www.cbioportal.org), 0.4% (0.2% amplification; 0.2% missense mutation) of ccRCCs showed genetic alterations in SSBP2. In conclusion, loss of SSBP2 expression was significantly correlated with aggressive phenotypes and poor recurrence-free survival in ccRCC. Further molecular investigations are needed to clarify the specific pathological mechanism of SSBP2 in ccRCC.

Conflicts of Interest

The Authors declare that there are no conflicts of interest with regard to the present study.
Impact of a ground intermediate transport from the helicopter landing site at a hospital on transport duration and patient safety

Background
Helicopter emergency medical service provides timely care and rapid transport of severely injured or critically ill patients. Owing to constructional or regulatory provisions at some hospitals, a remote helicopter landing site necessitates an intermediate ground transport to the emergency department by ambulance, which may lengthen the patient transport time and carries the risk of disconnection or loss of vascular access lines or breathing tubes, or impairment of other relevant equipment, during the loading processes. The aim of this study was to evaluate whether a ground intermediate transport at the hospital site prolonged patient transport and operating times or increased complication rates.

Methods
A retrospective analysis of all missions of a German air rescue service between 2012 and 2020 was conducted. The need for a ground transport at the accepting hospital, the transfer time from the helipad to the hospital, the overall patient transport time from the emergency location or the referring hospital to the accepting hospital, and the duration of the mission were analyzed. Several possible confounders, such as type of mission, mechanical ventilation of the patient, use of syringe infusion pumps (SIPs), and day- or nighttime, were considered.

Results
Of a total of 179,003 missions (92,773 [51.8%] primary rescue missions, of which 10,001 [5.6%] involved polytrauma patients, and 86,230 [48.2%] secondary transfers), an intermediate transport by ambulance occurred in 40,459 (22.6%) cases. While transfer times were prolonged from 6.3 to 8.8 min for primary rescue cases (p < 0.001) and from 9.2 to 13.5 min for interhospital retrieval missions (p < 0.001), the overall patient transport time was 14.8 versus 15.8 min (p < 0.001) in primary rescue and 23.5 versus 26.8 min (p < 0.001) in interhospital transfer. Linear regression analysis revealed a mean time difference of 3.91 min for mechanical ventilation of a patient (p < 0.001), 7.06 min for the use of SIPs (p < 0.001), and 2.73 min for an intermediate ambulance transfer (p < 0.001). No relevant difference in complication rates was seen.

Conclusions
An intermediate ground transport by ambulance from a remote helicopter landing site to the emergency department at the receiving hospital had a minor impact on transportation times and complication rates.

Supplementary Information
The online version contains supplementary material available at 10.1186/s13049-023-01124-7.

Background
Time to initiation of advanced care is of the essence for acutely injured or critically ill patients and may be directly reflected in outcomes [1]. In addition to providing advanced prehospital patient care on scene, helicopter emergency medical service (HEMS) facilitates rapid transport to high-care facilities [2][3][4]. It therefore plays a pivotal role in both prehospital emergency care and timely interhospital transfer of critically ill patients.
In order to ensure a swift transfer from the helicopter to the emergency department or treatment unit, several German guidelines and regulations on the care of acutely injured or polytraumatized patients specify the localization of helicopter landing sites at hospitals. The German Association of Trauma Surgery defines a helicopter landing site "in close proximity of the emergency room or resuscitation bay" as a prerequisite for a regional trauma center [5]. Likewise, the German Social Accident Insurance (IAG) lists a constantly operational helicopter landing site near the emergency department or resuscitation bay as a criterion for hospitals that participate in the care of severely injured patients [6]. The directive of the Federal Joint Committee (G-BA), which is the highest decision-making body of the joint self-government of physicians, dentists, hospitals, and health insurance funds in Germany and issues directives for the benefit catalogue of the statutory health insurance funds, requests a helicopter landing site at a hospital providing advanced emergency care that allows for air-bound patient transfer without an intermediate transport on the ground [7].

However, constructional or regulatory provisions, such as air traffic regulations or noise pollution control acts, sometimes do not allow for the installation of a helicopter landing site or public interest site right next to the emergency department. Hence, an intermediate transport from the helicopter landing site to the emergency department by ambulance may be necessary, which may lengthen the overall transport time and carries the risk of line or tube disconnection during the loading processes.

The aim of this study was to evaluate whether a ground intermediate transport at the hospital site prolonged patient transport times and mission times or increased complication rates.

Methods
A retrospective analysis of the database of the German air rescue service DRF Luftrettung (DRF Luftrettung gAG, Filderstadt, Germany) was conducted. The database collects all operational data of the 29 DRF helicopters in Germany. The helicopters are alerted as part of emergency medical services as well as the interhospital retrieval network. The medical crew on the helicopters consists of a prehospital emergency physician (mostly specialists in anesthesiology, surgery, or internal medicine) and a HEMS-TC (helicopter emergency medical system technical crew member), who is additionally qualified as a paramedic. Each mission is documented in a standardized online database (HEMSDER-Database, Convexis, Germany). The collected data included flight times as well as medical details on the patient and care.
This retrospective study included all missions of DRF helicopters with a patient transported to a hospital from 2012 to 2020. The following times were collected: time from landing at the hospital site to handover in the hospital (transfer time), time from helicopter start at the emergency location or the referring hospital to handover in the hospital (patient transport time), time from helicopter start at the emergency location or the referring hospital to landing at the accepting hospital (flight time), time from helicopter landing at the emergency location or referring hospital to handover in the hospital (patient contact time), as well as the time from patient contact until operational readiness after patient handover (mission time) (Table 1). The following patient characteristics were collected: age, diagnosis, intubation/ventilation, need for vasopressors, use of syringe infusion pumps (SIPs), transport by incubator, and transport of patients with a cardiopulmonary assist device (e.g., extracorporeal membrane oxygenation (ECMO) or (percutaneous) ventricular assist devices).

As urgency likely differed between primary rescue missions and secondary hospital transfers, missions were categorized into primary rescue missions and secondary interhospital patient transfer missions.

Several secondary analyses were performed to depict the many facets of HEMS. Since the above-mentioned provisions mainly apply to the care of multiply injured or trauma patients, a subgroup analysis was performed separating primary rescue missions into those for polytraumatized patients and those for all other emergencies.

In addition, missions were categorized by day or nighttime and by season; here, spring and autumn were combined as "mid-season" owing to similar meteorological conditions, including capricious temperatures, rainfall, and windy or foggy conditions. Further, the impact of transports by means of incubators and of transports with a paracorporeal support device (extracorporeal membrane oxygenation (ECMO), intra-aortic balloon pump (IABP), or percutaneous mechanical support (e.g., Impella®)) was explored. Moreover, information on the helicopter type was gathered where possible to account for differences resulting from diverging helicopter configurations.

For the descriptive analysis of numerical variables, the mean and standard deviation were computed; for categorical and dichotomous variables, the frequency and proportion in percent were calculated. The chi-squared test was used for comparative analysis of frequency distributions, and t-tests or ANOVA were calculated for metric variables. Effect sizes were calculated using Cramer's V and R-squared or Cohen's d, respectively. A p-value of 0.05 or less was considered statistically significant. Analyses were performed using the programming language R and the package effectsize [8].

Results
Demographics and severity of illness or injury, expressed via the National Advisory Committee for Aeronautics (NACA) score, differed between the groups of emergency patients (primary rescue) and retrieval patients (interhospital transport) and are depicted in Table 2. Two thirds of the patients were male, with slightly more male patients among the emergency patients. Compared with patients of primary rescue missions, who had an average age of 48.0 years, retrieval patients were markedly older, with an average age of 58.0 years (Table 2).
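The methods above name R and the effectsize package [8]; the sketch below shows how the reported effect-size measures (Cohen's d for time differences, Cramer's V for a complication cross-table) can be obtained. All data are simulated around the reported means and are not the study data.

```r
# Effect-size measures as named in the methods; data are simulated.
# install.packages("effectsize")
library(effectsize)

set.seed(1)
with_it    <- rnorm(100, mean = 8.8, sd = 5)  # transfer min, intermediate leg
without_it <- rnorm(100, mean = 6.3, sd = 5)  # transfer min, direct handover
cohens_d(with_it, without_it)                 # standardized mean difference

# Hypothetical 2x2 table: complication (yes/no) x intermediate transport
tab <- matrix(c(8, 792, 12, 1188), nrow = 2,
              dimnames = list(complication = c("yes", "no"),
                              intermediate = c("yes", "no")))
chisq.test(tab)
cramers_v(tab)  # effect size accompanying the chi-squared test
```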
Among emergency patients, traumatic single injury, intracerebral hemorrhage, other neurological emergencies such as stroke, and cardiovascular emergencies were the leading diagnoses. With regard to the severity of the emergency, 41% of these patients were assigned NACA score 3, 25% NACA score 4, and 30% NACA score 5. Intubation was required in 16.4% of the emergency patients and ventilation in 17.5%. Catecholamines were applied in 6.8%, with syringe infusion pump use in only 1.6%.

In interhospital transfer, the main diagnoses included stroke, acute coronary syndrome, vascular emergencies such as aortic dissection, neurosurgical pathologies such as intracranial bleeds, acute respiratory distress syndrome, and sepsis. 51% of these patients were rated with an NACA score of 5, 23.5% of patients were intubated, and 30.0% were dependent on a ventilator. 21.5% received catecholamine therapy, and in 32.1% of the cases syringe infusion pumps were utilized (Table 2). An intermediate ground transport by ambulance occurred in 20.6% of emergency patients and 24.7% of interhospital transfers.

Transfer times from the helicopter landing site to the emergency department were 6.3 and 9.2 min for primary rescue and interhospital transfer, respectively, and were prolonged to 8.78 and 13.5 min when a ground intermediate transport took place (p < 0.001; d = 0.34 and 0.38).

Patient transport time without and with intermediate transport was 14.8 versus 15.8 min (p < 0.001, d = 0.1) in primary rescue and 23.5 versus 26.8 min (p < 0.001, d = 0.16) for interhospital transfer. Details for flight time and patient contact time are listed in Table 3. Linear regression analysis of selected determinants of patient transport time revealed a mean time difference of 3.91 min for mechanical ventilation of a patient (p < 0.001), 7.06 min for the use of SIPs (p < 0.001), and 2.73 min for an intermediate ambulance transfer (p < 0.001); in primary rescue missions, the difference was 1.69 min for ventilation, 4.52 min for use of SIPs, and 1.03 min for an intermediate transfer.

Mission time, defined as the time from landing at the emergency site or the referring hospital until operational readiness, was 74.2 versus 78.0 min (p < 0.001, d = 0.13) in primary rescue and 109.6 versus 120.2 min (p < 0.001, d = 0.21) for interhospital transfer cases.

When assessing the impact of ventilation, use of SIPs, and intermediate ground transport on complication rates, the need for ventilation was associated with an odds ratio (OR) of 3.76 (3.41-4.15) and use of SIPs with an OR of 3.20 (2.89-3.53), while intermediate ground transport resulted in an OR of 1.28 (1.15-1.43) (Fig. 2).

Analysis of variance showed that the helicopter model may account for differences in patient transport time (F(3,121859) = 4,287, p < 0.001, R² = 0.119) but does not alter transfer time at the hospital site (F(3,120151) = 352, p < 0.001, R² = 0.008) (Appendix 1, Supplementary Material 1). Furthermore, analyses of variance showed no significant influence of either daytime or season on the effect of intermediate transport on the respective times or complication rates. Similarly, an intermediate transport showed no significant effect on times in transports by means of an incubator or in transports of patients with a percutaneous heart or lung assist device. More detailed results in this regard, as well as a subgroup analysis of primary rescue missions for polytraumatized patients versus all other emergencies, are presented in Appendix 2 (Tables 1 and 2, Supplementary Material 2).
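The two regression analyses reported above, a linear model for transport time and a model yielding odds ratios for complications, can be sketched in R as follows. The data frame d is simulated; whether the authors derived the ORs from a logistic model or from cross-tables is not stated, so the glm step is an assumption.

```r
# Simulated mission-level data: 0/1 flags for the three determinants
set.seed(2)
d <- data.frame(
  transport_min = rnorm(5000, 20, 8),
  complication  = rbinom(5000, 1, 0.01),
  ventilated    = rbinom(5000, 1, 0.20),
  sip           = rbinom(5000, 1, 0.15),
  intermediate  = rbinom(5000, 1, 0.23)
)

# Linear model: coefficients are mean time differences in minutes
summary(lm(transport_min ~ ventilated + sip + intermediate, data = d))

# Logistic model: exponentiated coefficients are odds ratios for
# complications (cf. the reported ORs of 3.76, 3.20, and 1.28)
fit <- glm(complication ~ ventilated + sip + intermediate,
           data = d, family = binomial)
exp(cbind(OR = coef(fit), confint.default(fit)))  # ORs with Wald 95% CIs
```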
Discussion
The study at hand evaluated the effect of an intermediate ground transport by ambulance from the helicopter landing site at a hospital on patient transport times and safety. The analysis was presented for primary rescue missions as well as secondary interhospital retrieval missions, since these tend to differ in urgency but also in the complexity of patient care and equipment; further, the handover of interhospital retrieval patients might not take place in the emergency department but rather in intensive care units or in theatres and might therefore involve a longer distance within the hospital. While an intermediate ground transport from a remote landing site by ambulance implies an additional loading and unloading process and an additional ground transportation leg, a direct transfer from the helicopter might either involve the use of an elevator in the case of rooftop landing pads or a walk by foot from a landing site in safe distance from the hospital building to the emergency department.

In this study, an intermediate transport prolonged the transfer time of patients from helicopter landing at the hospital to handover in the emergency department or in the ICU, for emergency and retrieval patients respectively, by 2.51 min on average in primary rescue missions and 4.31 min in secondary interhospital transfers. This effect, weak to begin with, further diminished when assessing the overall patient transport time, and no relevant impact on the rate of documented complications was seen. HEMS has been established to provide emergency care in otherwise inaccessible locations and timely hospital admission for critically injured patients [9]. In the past, numerous studies have evaluated the effects of HEMS compared with ground-based emergency services and weighed the benefits of a more intense and invasive treatment and faster transport by HEMS against higher expenses [4,10,11]. However, there has been little investigation of the transfer time from the helicopter landing site to the hospital. Zanic and colleagues report a mean heliport-to-hospital time of seven minutes in Croatian emergency air transport for acute chest pain patients, which is akin to our findings [12]. Furthermore, only one published study was identified addressing a transport delay resulting from a remote helicopter landing site [13]. Lerner and colleagues determined a time delay in emergency department arrival of trauma patients resulting from a remote helipad requiring an ambulance transport at a trauma center of 5.2 ± 2.3 min by simply taking the time difference between landing of the helicopter and arrival at the emergency department, but they did not compare with transports without an intermediate leg. In contrast, the study at hand compared the actual transfer times of ground transport by ambulance (from a remote helipad) with those of transfer by foot (from an adjacent helipad), therefore depicting a more realistic comparison. It revealed an effective delay of as little as 2.5 min for emergency patients. While for some patients who are in extremis any time saving is crucial, the reported delay appears short against a background of a total patient transport time of 16 min and a total prehospital patient contact time of about an hour. Results of the linear regression analysis underlined this trend by displaying the influence of individual determinants on total patient transport time: mechanical ventilation of a patient or the use of SIPs (5.69; t-value −17.2) both exhibited a longer mean differential time span than an intermediate ambulance transfer.
The number of reported complications was low at 1% in our cohort; missions during nighttime presented with slightly higher complication rates of 1.3%, and the rate of documented complications increased to up to 3% in missions with extracorporeal support. While the effect of an intermediate transport on the complication rate was significant, its effect size was minimal (Cramer's V 0.01). As very likely only complications with clinical relevance were documented, there might be a risk of underreporting critical incidents. While a recent meta-analysis on interhospital transport of critically ill patients comprising 14,969 transports found a pooled rate of adverse events of 11%, the rate of complications was shown to be as low as 1% when specialized teams performed the retrieval [14,15].

In contrast to past deductions that an additional ground transport leg from the remote helipad would pose an inherent risk to the patient in the form of jostling during the loading processes or dislodgement of tubes and lines, the number of relevant complications did not distinctly differ between missions with or without ambulance ground transport in this study [16]. Moreover, the need for mechanical ventilation or use of SIPs seemed to have a markedly higher impact on complication rates than an intermediate transport at the hospital.

As external circumstances might influence performance and framework conditions, we also took daylight and weather conditions into consideration. With the majority of HEMS sites only operating during daylight, 86% of the missions were performed by day and 11% during nighttime. Even though there was a tendency toward a higher likelihood of complications for nocturnal missions (1.4% versus 1.0%) and missions with percutaneous cardiopulmonary assist devices (3.0% versus 0.9%), the occurrence of an intermediate transfer had an impact on neither complication rates nor transfer times.

In addition, neither different helicopter models, which entail disparate patient stretcher handling mechanisms, nor seasonal meteorological effects showed a signal of influencing the time delay due to intermediate ground transport or the complication rates.

Although these results call into question a negative effect of an intermediate transport by ambulance, it should not be left unmentioned that the helicopter landing site should probably be within a certain proximity of the emergency department: in this cohort, in the intermediate transport group, the transfer time from the helicopter landing site to the emergency department was about 9 min on average in primary rescue missions and 13 min in secondary interhospital transports, with the majority (75%) of intermediate transports taking no longer than 12 and 17 min, respectively.
This study has several limitations, many of which are inherent to the retrospective design. Data quality relied on the accuracy of the documentation during the missions. The documentation of complications did not allow us to determine their nature or whether they occurred during air or ground transport. Further, there are scarce data on incubator missions or transports on paracorporeal devices, as these are often performed by a dedicated team using separate documentation. Unfortunately, attempts to retrieve these data by deduction from other information, such as hospital sites and their helicopter landing facilities, were unsuccessful. Hence, the results of the corresponding analysis should be interpreted cautiously.

The missing link to clinical patient outcome, as documentation of the transport ended with handover in the hospital, presents a major shortcoming of this study.

The large sample size provided a comprehensive picture of helicopter transports and a robust data foundation. However, this entails the risk that statistical significance is easily reached due to the high number of observations rather than the magnitude of the effect. Effect sizes were calculated and reported to account for this.

Conclusions

In conclusion, an intermediate transport from a remote landing site at the hospital prolonged the transfer time of patients from helicopter landing to handover in the emergency department or the ICU, for emergency and retrieval patients, by a few minutes. This effect, albeit weak, further diminished when assessing the overall patient transport time. The impact on the rate of documented complications was of subordinate magnitude.

Fig. 1 Number of transports of a HEMS association during a nine-year period

Table 1 Description of the time intervals that were examined: time from helicopter landing at the emergency location or referring hospital to handover in the hospital; mission time: time from patient contact until operational readiness after patient handover

Table 2 Demographics and medical device use: age and sex of patients, NACA score, and number of intubated and ventilated patients, as well as requirement of catecholamine therapy or syringe infusion pump use. Age in years: mean and standard deviation; all other: number and percent of the respective cohort

Table 3 Transfer and transportation times: please refer to Table 1 for definitions of the time intervals examined
2023-10-25T14:05:38.676Z
2023-10-24T00:00:00.000
{ "year": 2023, "sha1": "ce30c9e97f90a73996c85e1b317ce500fab5c1cb", "oa_license": "CCBY", "oa_url": "https://sjtrem.biomedcentral.com/counter/pdf/10.1186/s13049-023-01124-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "55ce9330b089beb07c3d090557c4bfc1b42e6507", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18932626
pes2o/s2orc
v3-fos-license
Early Enteral Feeding After Living Donor Liver Transplantation Prevents Infectious Complications

Abstract

Infectious complications, including bacterial, viral, and fungal infections, often occur after liver transplantation and are the most frequent causes of in-hospital mortality. The current study prospectively analyzed the effect of early enteral feeding in patients after living donor liver transplantation (LDLT). Between January 2013 and August 2013, 36 patients underwent LDLT. These patients were randomly assigned to receive enteral formula via nasointestinal feeding tubes [enteral feeding (EN) group, n = 17] or maintenance on intravenous fluid until oral diets were initiated (control group, n = 19). All patients completed the study. The pretransplant and perioperative characteristics of the patients did not differ between the 2 groups. The incidence of bacterial infection was significantly lower in the EN group (29.4%) than in the control group (63.2%) (P = 0.043). In addition, the incidence of bile duct complications in the EN group was lower than in the control group (5.9% versus 31.6%, P = 0.041). Multivariate analysis showed that early enteral feeding was closely associated with bacterial infections (odds ratio, 0.178; P = 0.041). There was no statistically significant difference in nutritional status between the 2 groups. There were no cases of in-hospital mortality. Early enteral feeding after LDLT prevents posttransplant bacterial infection, suggesting the possibility of a reduction in in-hospital mortality as a result of decreased infectious complications.

INTRODUCTION

Nutrient metabolism is altered in patients with liver disease because the liver is a central organ for metabolism. Thus, protein malnutrition and imbalance develop as a result of progressive liver disease. [1][2][3][4] Protein-energy malnutrition is common in patients with end-stage liver disease requiring liver transplantation (LT). Recently, only the sickest patients with the highest model for end-stage liver disease scores have received transplants, because organs from deceased donors are relatively scarce. These patients, who are significantly malnourished and physically deconditioned, have an increased risk of posttransplant morbidity and mortality. 5,6 Infectious complications have a significant impact on the survival of patients who have received liver transplants, because of the invasive surgical procedures involved and the need for immunosuppression, and are closely related to in-hospital mortality. 1,7,8 Preventive strategies against infectious complications after transplantation improve short-term outcomes after organ transplantation. Nutritional support has been recognized as a vital component of the management of liver transplant recipients to aid patient recovery. The advantages of enteral nutrition over parenteral nutrition as nutritional support for critically ill patients with respect to infectious complications are well recognized. 9 European Society for Parenteral and Enteral Nutrition guidelines recommend early initiation of normal food intake or enteral feeding after organ transplantation as soon as possible. 3,10 Several studies have examined the prevalence of posttransplantation bacterial sepsis in patients undergoing deceased donor liver transplantation (DDLT), and the benefits of perioperative nutritional therapies have been demonstrated in this setting. 4,11 Many transplant centers in Korea have used living donors as a source for LT because of the limited number of available deceased donors.
In contrast to DDLT, there was little evidence of beneficial effects when living donor liver transplant (LDLT) recipients received early enteral nutrition. In this study, we prospectively analyzed nutritional parameters in a group of patients undergoing early enteral feeding after LDLT, as well as the relationship between enteral feeding and short-term clinical outcomes.

Patients

The current study was designed as a pilot randomized controlled trial to evaluate perioperative changes in nutritional parameters in the early posttransplant period after LDLT. The study included 36 consecutive patients who underwent elective LDLT at Samsung Medical Center from January 2013 to October 2013. The study was approved by Samsung Medical Center's Institutional Review Board in Seoul. All participating patients provided written consent. The patients were divided into 2 groups using a block method for randomization: a "control" group (n = 19) and an "EN" group (n = 17). None of the patients were excluded from the analysis, and all 36 patients were included in the per-protocol analysis.

Assessment of Nutritional Status

Nutritional assessment was performed by one experienced dietician during a thorough evaluation before LT. Body mass index (BMI) was calculated as body weight (kg)/height (m²). Body weight was measured before transplantation. Ideal body weight was computed from estimated weight. Mid-arm circumference (MAC, cm) was measured with a spring tape at the midpoint between the tip of the acromion and the ulnar process, with the nondominant arm hanging freely. Triceps skinfold thickness was measured on the nondominant arm using a Harpenden Skinfold Caliper. Mid-arm muscle circumference (MAMC) was calculated by the formula: MAMC = MAC - [π × triceps skinfold thickness]. 11 The subjective global assessment (SGA) integrated weight loss or gain, dietary history, gastrointestinal symptoms, medical history, coexisting medical conditions, physical activities, and physical signs of malnutrition of the patients. Malnourishment was defined as less than 5% of MAMC.

Figure 1 shows the nutritional intervention schedule used in this prospective study. We provided enteral nutrition after LDLT via a nasogastric tube placed in the stomach several days after the operation. We routinely started enteral feeding within 12 hours of tube placement for patients without enteral anastomosis. Enteral nutrition was started at 20 mL/hour for 12 hours and, if well tolerated, the enteral infusion rate was increased to 60 mL/hour by postoperative day 5. A low-residue enteral liquid diet (Mediwell RTH 500, MDwell Inc., Seoul, South Korea) was administered. Enteral feeding was discontinued once a patient could eat more than 50% of the provided regular diet.

Antimicrobial Prophylaxis

Perioperative prophylaxis consisted of intravenous cefotaxime (4 g/day) and ampicillin sulbactam (6 g/day) given 4 times/day for 2 days after LDLT, starting 30 minutes before the operation. If bacterial sepsis was clinically suspected, broad-spectrum antibiotics were administered empirically. Central venous catheters placed in the internal jugular vein were usually removed within 7 days after LDLT and replaced with a peripheral catheter.
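For clarity, here is a minimal sketch of the anthropometric calculations described above. The function names are ours; the formulas (BMI and MAMC = MAC - π × triceps skinfold thickness) follow the definitions given in the text, assuming MAC and skinfold thickness are expressed in the same unit (cm).

```python
# Minimal sketch of BMI and MAMC, per the definitions in the text.
import math

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def mamc(mac_cm: float, tsf_cm: float) -> float:
    """Mid-arm muscle circumference = MAC - pi * triceps skinfold thickness."""
    return mac_cm - math.pi * tsf_cm

print(f"BMI  = {bmi(58.0, 1.62):.1f}")     # e.g., 22.1
print(f"MAMC = {mamc(26.0, 1.2):.1f} cm")  # e.g., 22.2 cm
```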
Outcomes

All outcomes were evaluated during the first 3 months after LDLT. The primary endpoint was the occurrence of infectious complications, and the secondary endpoints were total length of hospital stay, improvement in nutritional status, episodes of acute rejection, bile duct complications, graft failure, and mortality. Bacterial, fungal, and cytomegalovirus (CMV) infections were continuously monitored after LDLT. Bacterial and fungal infections were diagnosed when clinical manifestations were present and causative organisms were isolated simultaneously. CMV infection was diagnosed as a CMV pp65 antigen-positive cell count greater than 1 positive cell per 200,000 white blood cells in patients in whom CMV antigen had not been detectable previously. 12 The procedures used for biliary reconstruction in LDLT recipients and the prevention of infectious complications after LDLT have been described previously. 13,14 Bile duct complications were defined as biliary stricture or biliary leakage after LDLT.

Statistical Analysis

Continuous variables are presented as the median and range. Data from categorical variables are expressed as percentages or counts. Differences in continuous variables between the control and EN groups were analyzed by the Mann-Whitney U test. Differences in frequency were analyzed by the χ² test or the Fisher exact test. Sequential nutritional assessments between the 2 groups were evaluated using a mixed model test. Multivariate analyses were performed using the logistic regression model. All statistical analyses were performed with SPSS 21.0 software. All reported P values were 2-sided, and a P value < 0.05 was considered statistically significant.

Patients

Clinical features of the subjects in both groups are shown in Table 1. The median age of the control group and the EN group was 52 years (range: 36-64) and 52 years (range: 43-65), respectively. No significant differences in sex, BMI, or history of hypertension and diabetes were noted between the control and EN groups. The 2 groups were comparable on the basis of diagnosis, Child-Pugh class, and model for end-stage liver disease score. All patients received a right liver graft from a living donor and underwent duct-to-duct biliary anastomosis for reconstruction. There were no statistical differences in graft-to-recipient weight ratio, graft volume/standard liver volume, recipient operative times, donor operative times, cold ischemic time, warm ischemic time, steatosis, postoperative intensive care unit (ICU) stay, or hospitalization between the 2 groups (Table 2). Two patients in the EN group could not tolerate early enteral feeding; one had ileus and the other had vomiting. The remaining 15 patients who received early enteral feeding tolerated it well.

Nutritional Changes Between Early Enteral Feeding and Control

Nutritional status based on BMI, MAC, triceps skinfold thickness, subjective global assessment, and MAMC at pretransplant and at postoperative 1 week, 1 month, and 3 months did not differ between the EN group and the control group (Fig. 2). The proportion of malnourished patients at pretransplant was 10.5% in the control group and 23.5% in the EN group. At 1 month after LDLT, the proportion of malnourished patients in the control group was higher than that in the EN group (31.6% versus 23.5%; Fig. 3), although the difference was not statistically significant.

Clinical Outcomes

Early enteral feeding did not significantly influence ICU stay or hospitalization. Only 1 patient in the EN group developed biopsy-proven acute cellular rejection during the first 3 months.
No significant difference in the incidence of biopsy-proven acute cellular rejection was noted between the 2 groups after LDLT. The incidence of bile duct complications in the control group was higher than that in the EN group (31.6% versus 5.9%, P = 0.041). Differences in the rates of infection between the 2 groups were detected. Bacterial infections occurred in 63.2% of the control group compared with 29.4% of the EN group (P = 0.043). Multivariate analysis revealed that early enteral feeding was closely associated with bacterial infection (odds ratio, 0.178; 95% confidence interval, 0.034-0.928; P = 0.041), but was not related to bile duct complications (odds ratio, 0.066; 95% confidence interval, 0.003-1.339; P = 0.077). No statistically significant differences existed between the 2 groups for CMV, fungal, or viral infections other than CMV (Table 3). There were no cases of graft failure or in-hospital mortality.

DISCUSSION

Nutritional support provided immediately posttransplant facilitates nutritional and medical recovery in DDLT. 1,2 Most patients undergoing LT have poor nutrition and are therefore good candidates for early enteral nutrition. The effects of postoperative enteral feeding in LDLT, however, have not been analyzed. Compared with DDLT, LDLT involves smaller grafts and is scheduled as relatively elective surgery. The small graft size in adult-to-adult LDLT carries a risk of graft failure and patient death because of increased portal venous pressure, impaired bowel motility, bacterial translocation, ascites production, hyperbilirubinemia, and a bleeding tendency from prolonged prothrombin time. 15 These factors might increase the risk of bacterial sepsis in LDLT compared with DDLT. In the current study, the incidence of bacterial infections and biliary complications in LDLT recipients who did not receive enteral feeding was significantly higher than that in the early EN group. Multivariate analysis showed that early enteral feeding was closely associated with the prevention of bacterial infections, but was not related to biliary complications. We found that early enteral nutrition also reduced the incidence of bacterial infection after LDLT. The period spent in the ICU and the duration of hospitalization after LDLT, however, were not influenced by early enteral feeding.

Infection is one of the most serious complications after liver transplantation. Bacterial infections in liver transplant recipients are associated with an increased mortality rate. 8 The intestine contains the largest population of bacterial flora in the body, and both the intestinal immune system and the mucosal barrier system play key roles in protecting against bacterial infection. 16 Bacterial overgrowth and suppression of the intestinal antibacterial defense system are particular problems in patients with hepatic dysfunction and are caused by portal hypertension, which results in intestinal edema and decreased peristalsis. 17 Previous studies have indicated that sepsis is related to bacterial translocation and enterogenous endotoxemia, which result from intestinal mucosal barrier injury caused by total hepatic vascular exclusion and reperfusion. 8,18 Enteral nutrition stimulates bile flow and portal blood flow, prevents intestinal mucosal atrophy, and preserves intestinal structure and function. 16 In the current study, the overall proportion of patients with a bacterial infection was 63.2% in the control group and 29.4% in the EN group. In addition, a trend toward a decrease in CMV infection was detected in the EN group.
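As an aside, the group comparisons reported above can be reproduced in outline with standard tools. The sketch below uses the tests named in the Statistical Analysis section (Mann-Whitney U, Fisher exact, logistic regression) on synthetic placeholder data, not the trial's records; the counts only loosely echo the reported infection rates.

```python
# Hedged sketch of the analyses named in Statistical Analysis; data are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu, fisher_exact
import statsmodels.api as sm

rng = np.random.default_rng(1)
age_control = rng.normal(52, 7, 19)
age_en = rng.normal(52, 6, 17)
print(mannwhitneyu(age_control, age_en))   # continuous variable comparison

# infection yes/no by group (counts roughly echo 63.2% vs 29.4%)
print(fisher_exact([[12, 7], [5, 12]]))

# logistic regression: infection ~ early enteral feeding (1) vs control (0)
en = np.r_[np.zeros(19), np.ones(17)]
infected = np.r_[rng.binomial(1, 0.63, 19), rng.binomial(1, 0.29, 17)]
logit = sm.Logit(infected, sm.add_constant(en)).fit(disp=0)
print(np.exp(logit.params[1]))             # odds ratio for early enteral feeding
```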
A patient's nutritional status can worsen rapidly during the first 2 postoperative weeks as a result of preoperative malnutrition, surgical stress, immunosuppressive therapy, postinterventional complications, postoperative protein catabolism, and fasting. 19 Thus, optimizing nutrient intake over this period is critical to promote wound healing and hepatocyte recovery. 1,20 The goal of nutrition therapy in the early posttransplant period is to ensure adequate protein and calorie provision to avoid protein breakdown. 5

Cytokines play a major role in inflammation. Therefore, the association between posttransplant complications and T-helper cytokine levels has been studied to understand immune system modulation. 21 A recent study reported that patients with biliary complications had higher interleukin (IL)-2, IL-4, and IL-12 levels than patients without. We suspected that early enteral feeding might affect serum cytokine levels. 22 The current study, however, did not examine the relationship between serum cytokine levels and biliary complications, so we could not draw a conclusion as to whether early enteral nutrition prevents biliary complications after LDLT.

Studies indicate that many patients (30%-60%) are malnourished at transplantation. 1,2 A recent study reported that perioperative nutritional therapy improved survival in patients with low skeletal muscle mass, but not in patients with normal or high skeletal muscle mass. 23 Severe malnutrition, however, was identified in only 16.7% (n = 6) of our patients, because hepatocellular carcinoma was the main indication for transplantation and LDLT does not involve a wait time. Therefore, graft failure and perioperative mortality were not observed in our series. As a result, we could not investigate the correlation between early enteral feeding and mortality. In addition, early enteral feeding did not significantly improve nutritional status.

The current study had several limitations. First, our study did not compare calorie intake between the 2 groups. Second, the number of malnourished patients was low because of the nature of LDLT. Third, there was potential selection bias because of the small number of patients. Fourth, the low number of events related to graft or patient survival could have obscured the effect of nutritional status on these parameters.

In conclusion, the current study supports the use of early enteral feeding after LDLT. Early enteral feeding was well tolerated by most patients and resulted in a lower rate of bacterial infection and bile duct complications compared with patients who did not receive early nutritional support.
2018-04-03T01:41:22.368Z
2015-11-01T00:00:00.000
{ "year": 2015, "sha1": "9d35fdb7e82f36a90a4c00e2f6c29c594964abf5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000001771", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9d35fdb7e82f36a90a4c00e2f6c29c594964abf5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3944967
pes2o/s2orc
v3-fos-license
An Ensemble Approach to Knowledge-Based Intensity-Modulated Radiation Therapy Planning

Knowledge-based planning (KBP) utilizes experienced planners' knowledge embedded in prior plans to estimate the optimal achievable dose volume histogram (DVH) of new cases. In the regression-based KBP framework, previously planned patients' anatomical features and DVHs are extracted, and prior knowledge is summarized as the regression coefficients that transform features into organ-at-risk DVH predictions. In our study, we find that different regression methods work better in different settings. To improve the robustness of KBP models, we propose an ensemble method that combines the strengths of various linear regression models, including stepwise, lasso, elastic net, and ridge regression. In the ensemble approach, we first obtain individual model prediction metadata using in-training-set leave-one-out cross validation. A constrained optimization is subsequently performed to decide the individual model weights. The metadata are also used to filter out impactful training set outliers. We evaluate our method on a fresh set of retrospectively retrieved anonymized prostate intensity-modulated radiation therapy (IMRT) cases and head and neck IMRT cases. The proposed approach is more robust against small training set size, wrongly labeled cases, and dosimetrically inferior plans, compared with the individual models. In summary, we believe the improved robustness makes the proposed method more suitable for clinical settings than the individual models.

Keywords: treatment planning, dose volume histogram prediction, regression model, machine learning, ensemble model, statistical modeling

INTRODUCTION

In radiation therapy, high quality treatment plans are crucial for reducing the possibility of normal tissue complications while maintaining good dose coverage of the planning target volume (PTV). For intensity-modulated radiation therapy (IMRT), it is especially important to fully utilize the healthy-tissue-sparing potential enabled by the advanced treatment delivery system.
However, the optimal achievable organ-at-risk (OAR) sparing is not known before planning, and planners need to rely on their previous experience, which makes the planning process subjective, iterative, and susceptible to intra- and inter-planner variation. Knowledge-based planning (KBP) (1)(2)(3)(4)(5) has been shown to be a powerful tool for guiding planners and physicians toward the optimal achievable OAR dose volume histograms (DVHs) based on previous cases planned by experienced planners. In a previously proposed regression-based KBP framework (2), the workflow is as follows: (i) principal component analysis (PCA) is conducted for OAR DVHs in the training set, and the first three principal component scores (PCS) and corresponding basis vectors are stored; (ii) pre-determined geometry information related to treatment planning goals, also referred to as features, is calculated for each patient; (iii) the PCS of the OAR DVHs are fitted to the features to generate a prediction model; (iv) features are calculated for new patients; and (v) best achievable OAR DVHs are calculated for new patients using the fitted model and the previously stored PCA basis vectors (see the code sketch below).

In step (iii) of this framework, stepwise regression is used to select features and estimate the linear model. The method automatically picks the several most important features, step by step, based on feature significance. This approach is easy to implement and its output is interpretable. With careful training data preprocessing and feature selection, stepwise regression has achieved good results in OAR DVH prediction in research settings (6)(7)(8)(9)(10)(11)(12). However, there are some theoretical issues with this procedure that can result in instability of the overall model training process. While stepwise regression has been very successful in the context of KBP, its potential disadvantages are well documented. First, it potentially suffers from overfitting if the size of the training set is relatively small compared to the number of features, because the procedure fits many models and the p-values used as feature selection criteria are not corrected for the number of hypotheses tested. In addition, stepwise regression does not cope well with collinear features. If two features are highly collinear, stepwise regression usually selects just one and discards the other. Ideally, if several collinear features are predictive of the outcome, all of them should be selected to prevent overfitting and reduce model variance.

The purpose of this study is to improve the regression modeling aspect of KBP. Empirically, different regression methods perform well in different scenarios, such as different numbers of training cases, the presence of collinear features, and the presence of outlier cases. In this work, we develop an ensemble learning method to combine the strengths of these individual models and improve KBP model robustness.

MATERIALS AND METHODS

Individual Models

As a comparison to our proposed ensemble model, we study four individual regression models: ridge regression (13,14), lasso (15), elastic net (16), and stepwise regression with forward feature selection. These models also serve as base learners for the final ensemble model.
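Before detailing the individual models, here is a minimal sketch of the overall regression-based KBP workflow (steps i-v above). The array shapes and the synthetic DVH/feature data are illustrative assumptions; the published framework uses real anatomical features such as distance-to-target histogram PCS.

```python
# Sketch of the regression-based KBP workflow with stand-in data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_cases, n_bins, n_feats = 50, 100, 9
dvhs = np.sort(rng.random((n_cases, n_bins)))[:, ::-1]   # stand-in cumulative DVHs
features = rng.normal(size=(n_cases, n_feats))           # stand-in anatomical features

pca = PCA(n_components=3).fit(dvhs)          # (i) first three principal components
pcs = pca.transform(dvhs)                    # DVH principal component scores
reg = LinearRegression().fit(features, pcs)  # (iii) fit PCS to features

x_new = rng.normal(size=(1, n_feats))        # (iv) features of a new patient
dvh_pred = pca.inverse_transform(reg.predict(x_new))  # (v) predicted achievable DVH
print(dvh_pred.shape)  # (1, 100)
```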
The latter three models share the same objective function

$$\hat{\beta} = \operatorname*{arg\,min}_{\beta}\; \lVert Y - X\beta \rVert_2^2 + \varphi(\beta), \tag{1}$$

where $X \in \mathbb{R}^{N \times P}$ denotes the $P$ feature values from $N$ training cases, $Y \in \mathbb{R}^{N}$ denotes the OAR DVH PCS of the cases in the training set, and $\beta \in \mathbb{R}^{P}$ denotes the regression coefficients corresponding to the $P$ anatomical features, such as the PCS of the distance-to-target histogram. Detailed descriptions of feature extraction and dimension reduction for KBP can be found in Ref. (1,2). The last term, known as the penalty term, balances the bias and variance of the trained model. The goal of KBP is to obtain the regression coefficients β based on cases previously planned by experienced planners; when a new case needs to be planned, the optimal OAR DVH can be calculated simply using the model-predicted PCS of Xβ.

In ridge regression, the penalty term φ(β) is the squared 2-norm of the regression coefficients β; in lasso, the penalty term is the 1-norm of β; and in elastic net, the penalty term is a linear combination of the 1-norm and the squared 2-norm:

$$\varphi(\beta) = \lambda_1 \lVert \beta \rVert_1 + \lambda_2 \lVert \beta \rVert_2^2. \tag{2}$$

The penalty weights λ1 and λ2 are selected based on internal cross validation. Forward selection, a type of stepwise regression, is the last individual model. It finds the most significant features to add, step by step, based on the data, hence the name. When adding features no longer improves the model by a preset p-value threshold, the feature selection step terminates. The selected features are fitted to the data with ordinary least squares, while the rest of the features are discarded.

The Ensemble Model

Many ensemble models have been proposed over the years in the field of machine learning, such as random forest (17), boosting (18), bagging (19), and stacking (20). The basic idea behind these ensemble models is to develop an array of simple models, often referred to as base learners, and combine them to form a better (e.g., lower variance, higher accuracy, or both) model for prediction (21). These models essentially seek to combine knowledge learned by different models via data resampling and/or adding another layer of optimization.

The primary motivation of our ensemble model is to make KBP more robust and adaptive. In different settings, different regression models perform well, and none of these individual models consistently outperforms the others. For instance, stepwise regression is widely known to be unstable (22), but as shown in Section "Results," it can significantly outperform other more stable models such as ridge regression in certain settings. However, it is not feasible to test individual models every time a new model is fit. Therefore, we propose an ensemble model that performs well in all settings.

Model Stacking

In our proposed model, we combine the aforementioned individual models using the model stacking method. A previous study demonstrated that even stacking ridge regression alone with different penalty weights λ improved model generalization performance, and stacking models with different characteristics generated further improvement (20). The proposed ensemble approach is shown in Eqs 3-5:

$$z_{kn} = x_n^{\top} \beta_k^{(-n)}, \tag{3}$$

$$\alpha^{*} = \operatorname*{arg\,min}_{\alpha \geq 0}\; \sum_{n=1}^{N} \Big( y_n - \sum_{k=1}^{K} \alpha_k z_{kn} \Big)^{2}, \tag{4}$$

$$\hat{Y} = X \sum_{k=1}^{K} \alpha_k^{*} \beta_k. \tag{5}$$

First, the individual models βk, where k ∈ [1, K] denotes the individual model index, are trained separately on the training dataset, repeatedly, using all the training data except for case n. The prediction of the in-training-set but out-of-model case, z_kn, is then generated (Eq. 3). The process is repeated until all the models have covered all cases in the training set. Subsequently, the model weights αk* are optimized to minimize the internal cross validation error, as shown in Eq. 4. A non-negative constraint is applied to prevent overfitting and increase model interpretability. This step of optimization is performed on the metadata: the prediction results of each model for each case are used to optimize the model weights. Individual models that perform well in the prediction task tend to receive larger weights. The K individual models βk are then combined and used for prediction of the DVH PCS Y (Eq. 5). Note that the sum of the optimal model weights αk* is not constrained to 1, as one might intuitively expect. This is due to the distinct properties of the individual models in the ensemble. The regression coefficients from stepwise regression are usually too large due to the lack of a constraint and thus need shrinkage. On the contrary, the other three regression methods tend to under-fit, especially for noisy training data, i.e., data with high variance that cannot be explained by any features in X. In other words, even if we had just one model in the "ensemble," the model weight would still be highly unlikely to be 1 (usually smaller than 1 for stepwise regression and greater than 1 for the penalized linear regression methods). In practice, we observe that the sum of αk* is usually between 0.5 and 1.5.

The ensemble in this study consists of nine models: stepwise, ridge, lasso, and elastic net with six different λ2-to-λ1 ratios. Figure 1 shows one example of the model weights from the individual models. This model is built using 50 prostate sequential boost cases; Y is the bladder DVH PCS1, and X consists of bladder anatomical features. All features are standardized before training, so the weights of different features are on the same scale. It is apparent that the regression coefficients differ from model to model, even though these are all variants of linear regression. Note that model 1, stepwise regression, uses the fewest features, and model 2, ridge regression, evidently underfits.

Model-Based Case Filtering

In previous studies, it has been pointed out that automatic outlier removal requires further investigation (12,23). We propose to incorporate a model-based automatic outlier removal routine into the ensemble model to ensure model robustness and address the volatile nature of clinical data. We utilize the cross validation metadata native to the proposed ensemble method to identify and remove impactful dosimetric and anatomical outliers. The two types of outliers have different impacts on the training of regression models, as we illustrate in this section. Note that, by our definition, outliers only exist in training sets; all cases in testing sets are predicted. Cases that would be defined as outlier cases if they were in a training set can still be predicted by a trained model, but with less accuracy. These special cases can be identified with the same approach we use to identify outlier cases (see Model-Based Case Filtering Method), and case-based reasoning can be used to improve the outcome of treatment planning, but that is outside the scope of this study. We aim to improve the prediction accuracy of the KBP framework with a different modeling technique, without significant changes to the overall workflow.

Outliers

Clinical treatment planning varies from case to case, with different sparing and coverage considerations. Within the aforementioned KBP framework, we assume a linear model can successfully represent the majority of training cases. For some cases in the database, this assumption does not hold. We refer to these cases in the training dataset as outlier cases.
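To make Eqs 3-5 concrete, here is a minimal Python sketch under simplified assumptions: only three penalized base learners with fixed penalties (the study's nine-model ensemble also includes stepwise regression and a penalty grid), synthetic data, and scipy's non-negative least squares standing in for the constrained optimization of Eq. 4.

```python
# Sketch of stacking (Eqs 3-5): leave-one-out metadata Z, then NNLS weights.
import numpy as np
from scipy.optimize import nnls
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(0)
n, p = 40, 9
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)

models = [Ridge(alpha=1.0), Lasso(alpha=0.05), ElasticNet(alpha=0.05, l1_ratio=0.5)]

# Eq. 3: in-training-set, out-of-model (leave-one-out) predictions z_kn
Z = np.empty((n, len(models)))
for k, model in enumerate(models):
    for i in range(n):
        keep = np.arange(n) != i
        Z[i, k] = model.fit(X[keep], y[keep]).predict(X[i:i + 1])[0]

# Eq. 4: alpha* = argmin_{alpha >= 0} ||y - Z alpha||^2 (non-negative weights)
alpha, _ = nnls(Z, y)
print("model weights:", alpha)  # note: not constrained to sum to 1

# Eq. 5: ensemble prediction as the weighted sum of refit base learners
for model in models:
    model.fit(X, y)  # refit each base learner on the full training set
x_new = rng.normal(size=(1, p))
y_hat = sum(a * m.predict(x_new)[0] for a, m in zip(alpha, models))
print("prediction:", y_hat)
```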
In this section, we present our insight into outlier cases and provide an intuitive explanation of the effects of outliers on knowledge-based modeling.

Anatomical Outliers and Dosimetric Outliers

The first type of outlier is the anatomical outlier. In this study, we define anatomical outliers as cases with anatomical features that are distant from those of normal cases, and that possibly come from a different distribution. In KBP, anatomical outliers refer to cases with uncommon anatomical features relevant to DVH prediction, such as abnormal OAR sizes or unusual OAR volume distributions relative to the PTV surface. Generally, anatomical outliers are more likely to deviate from the linear model, as illustrated in Figure 2, and when they do, their effect is generally larger than that of normal cases due to the quadratic data fidelity term (the first term in Eq. 1) of the regression model. Therefore, it is necessary to identify anatomical outlier cases that are detrimental to model building and remove them before training.

Beyond anatomical outliers, there are cases that are detrimental to model building due to limited OAR sparing efforts and/or capabilities. These are considered dosimetric outliers in this work. Dosimetric outliers include, but are not limited to, (1) treatment plans with inferior OAR sparing and (2) wrongly labeled data, such as 3D plans mixed in with IMRT plans.

Outliers' Effect on Regression Models

In this section, we illustrate the effect of outliers on the overall regression model with one-dimensional simulated data. Figure 2A shows that anatomical outliers follow the same underlying X-to-Y mapping. However, the true underlying relation may not be well approximated by linear regression outside the normal X range. Attempting to fit linear regression with anatomical outliers mixed into the training set will potentially deteriorate the model. Therefore, the actual effect of anatomical outliers along different feature directions in the context of KBP needs careful assessment. Figure 2B illustrates the effect of dosimetric outliers. Dosimetric outliers in the training set are expected to increase model variance and bias the model. Note that this numerical demonstration isolates the effect of outliers on regression over a single feature, and it simplifies the influence of outliers on the overall modeling process. In our clinical knowledge-based modeling, we extract nine features from each case to construct the feature vector X. However, not every feature contributes equally to the final model. In stepwise regression, relevant features are picked based on their correlation with the outcome variable (i.e., DVH PCS). In the penalized regression methods, features are implicitly selected, with less relevant features given very small regression coefficients as a result of the penalty term. The feature selection step, while not considered here, is also affected by outliers. When anatomical outliers are involved in the training process, the features selected are potentially different from those that would be selected if the model were trained without outliers.

Prediction Performance Measure

Weighted root mean squared error (wRMSE) is defined to evaluate model prediction accuracy:

$$\mathrm{wRMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \bar{w}_i \left( \hat{d}_i - d_i \right)^2},$$

where $\hat{d}_i$ and $d_i$ denote the predicted and clinically planned DVH values at bin $i$ and $N$ is the number of DVH bins. wRMSE measures the overall deviation of predicted DVHs from ground truth DVHs, which are clinically planned. Weightings are introduced to emphasize the higher dose regions of DVHs, which are generally considered to be of more clinical significance in OAR dose predictions.
Here $\bar{w}_i = N w_i / \sum_{j=1}^{N} w_j$ denotes the normalized weighting factor for bin i of the DVH curves. For evaluation of dose to the bladder and rectum, we use a linear relative weighting $w_i$ that increases linearly from 50 to 100 between 0 Gy and the prescription dose. For evaluation of dose to the parotids in head and neck cases, $w_i$ is set to a Gaussian centered at the median dose, with an SD of 2 Gy. If $w_i$ is set to a constant, wRMSE reduces to the standard RMSE.

Model-Based Case Filtering Method

To further improve the robustness of the ensemble model, the cases with the highest s% median (over all individual models) internal cross validation wRMSE are dropped from the training set. The percentage threshold s is selected to balance the tradeoff between model robustness and accuracy. Empirically, we find that 10% is generally a good choice, even though the number of actual outlier cases is unknown and may differ from 10% of the total case number. All the experiments in the following section are conducted with the pre-determined 10% threshold. The workflow of the ensemble model with model-based case filtering is shown in Figure 3. Note that the whole process is automatic, without manual intervention (a code sketch of the wRMSE metric and this filtering rule is given at the end of this section).

Experimental Design

This retrospective study uses anonymized clinical plan data and has received permission from Duke University Medical Center's institutional IRB. All clinical plans were planned using the Varian Eclipse™ Treatment Planning System (Varian Medical Systems, Inc., Palo Alto, CA, USA). All experiments were performed on a PC with an Intel Xeon E5-2623 CPU and 32 GB of RAM running the Windows 10 Enterprise 64-bit operating system. In order to quantitatively evaluate the robustness of these regression methods in various challenging clinical environments, we test the aforementioned models with limited training set size, training sets contaminated with anatomical outliers, and training sets contaminated with dosimetric outliers. In our outlier robustness tests, we purposefully mix pre-defined outlier cases into the training set and validate the final model with normal cases. The reason for adding outlier cases is to introduce controlled variation into the dataset and evaluate the robustness of the proposed model. Details regarding the types of data used in the experiments are summarized in Table 1.

Robustness to Limited Training Set Size

In clinical practice, planners do not necessarily have many cases for every treatment site. This is particularly true when a new treatment technique, such as simultaneous integrated boost, has recently been adopted in the clinic and the existing model built for established treatment techniques may not predict the achievable DVH accurately due to differences in OAR sparing capability. Sometimes models need to be built when only a small number of cases (~20) are available. It is critical that the regression model is capable of resisting overfitting to the random variation of the training cases. In this experiment, 166 prostate PTV cases were retrospectively retrieved from the clinical database. Twenty prostate cases are used as the training set, and the remaining 146 cases are used as the validation set to quantitatively evaluate the prediction accuracy of each model.

Robustness to Anatomical Outliers

In clinical databases, not every previously treated case is helpful for predicting future cases, even when the treatment plans are of high quality.
If the anatomical features are very different from those of the majority of cases, the linear assumption may not hold, as demonstrated in Figure 2, and such cases are potentially detrimental to the model. To simulate the effect of anatomical outliers, we train a model with 10 prostate cases treated with lymph nodes and 40 prostate cases treated without lymph nodes. The trained models are subsequently validated with 111 cases that do not involve lymph nodes.

Robustness to Dosimetric Outliers

Dosimetric outliers do not follow the same conditional distribution as normal cases and are expected to be easier to identify with cross validation. An increase of dosimetric outliers in the training data tends to shift the overall model toward inferior plan DVHs and gradually make the plan less optimal (23). In this section, we evaluate the robustness of the individual models and the ensemble model with training sets contaminated by two types of dosimetric outlier plans: (i) inferior dose sparing and (ii) mis-labeled sparing decisions. For KBP, it is crucial to get reliable predictions even in the presence of sub-optimal plans. Here, we simulate sub-optimal plans with dynamic conformal arc plans. Compared with IMRT plans, conformal arc plans have evidently inferior OAR sparing capability. Our training data consist of 40 prostate IMRT cases and 10 prostate conformal arc plans, and the validation set includes 110 prostate IMRT plans. The experiment is designed to test the model in extreme settings in order to evaluate its robustness in challenging situations.

In clinical practice, it is not always feasible to spare both parotids due to geometric factors. A previous study has shown that parotid-sparing decisions affect KBP predictions, and separate models should be built for single-side parotid sparing and bilateral parotid sparing to get better prediction accuracy (24). We retrieve 228 bilateral parotid-sparing head and neck cases and 10 single-side parotid-sparing cases from our institutional clinical database. The sparing decisions are first obtained from clinical prescription documentation and subsequently checked against dose statistics to correct for decision changes. We randomly select 80 bilateral cases as the training set and then add 10 single-side sparing cases as mis-classified cases. The remaining 148 bilateral cases are used as the validation set.

Figure 4. For bladder prediction, the proposed ensemble method predicts significantly better than stepwise (p < 0.001), ridge (p < 0.001), lasso (p < 0.001), and elastic net (p < 0.001); for rectum prediction, the proposed ensemble method predicts significantly better than ridge (p < 0.001), lasso (p < 0.001), elastic net (p < 0.001), and stepwise (p < 0.001).

Figure 5. Forty prostate with seminal vesicle cases and 10 prostate with lymph node cases are used as the training set; 111 prostate with seminal vesicle cases are used as the validation set. For bladder prediction, the proposed ensemble method predicts significantly better than stepwise (p = 0.013), ridge (p < 0.001), lasso (p = 0.002), and elastic net (p < 0.001); for rectum prediction, the proposed ensemble method predicts significantly better than ridge (p < 0.001), lasso (p < 0.001), and elastic net (p < 0.001), and performs similarly well as stepwise (p = 0.210).
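Before the results, here is the sketch promised above of the wRMSE metric and the model-based case filtering rule from Materials and Methods. The array shapes and the stand-in error values are assumptions for illustration.

```python
# Sketch of wRMSE and the top-10% case filtering described in the methods.
import numpy as np

def wrmse(d_pred, d_true, w):
    """Weighted RMSE over DVH bins with normalized weights w_bar."""
    w_bar = len(w) * w / w.sum()          # normalize so the mean weight is 1
    return np.sqrt(np.mean(w_bar * (d_pred - d_true) ** 2))

n_bins = 100
w_linear = np.linspace(50, 100, n_bins)   # bladder/rectum: 50 -> 100 up to Rx dose

rng = np.random.default_rng(3)
n_cases, n_models = 50, 9
# median (across models) internal-CV wRMSE per training case (stand-in values)
cv_err = np.median(rng.random((n_cases, n_models)), axis=1)

s = 0.10                                  # empirically chosen 10% threshold
cutoff = np.quantile(cv_err, 1 - s)
kept = np.where(cv_err <= cutoff)[0]      # cases retained for final training
print(f"kept {kept.size}/{n_cases} cases")
```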
RESULTS

Robustness to Limited Training Set Size

The ensemble method outperforms all individual methods significantly, as shown in Figure 4. Note that ridge regression performs particularly poorly in bladder prediction, indicating that there is some intrinsic sparsity in the feature space; ridge regression, which does not exploit that sparsity, underfits significantly due to over-shrinking of the regression coefficients. Stepwise regression performs poorly in rectum predictions, due to overfitting.

Robustness to Anatomical Outliers

Figure 5 shows the prediction errors, measured by wRMSE, of the individual models and the ensemble model. For bladder predictions, the ensemble model outperforms all individual models, while stepwise, lasso, and elastic net perform similarly. For rectum predictions, the ensemble method again outperforms ridge, lasso, and elastic net, and performs similarly well as stepwise. Ridge regression fails to predict accurately in either task.

Robustness to Dosimetric Outliers

Inferior Plans

Figure 6 shows that, for both bladder and rectum prediction, lasso, elastic net, and the proposed ensemble regression method predict equally well, while stepwise and ridge are no longer usable due to the significant amount of error.

Figure 6. For bladder prediction, the proposed ensemble method predicts significantly better than stepwise (p < 0.001) and ridge (p < 0.001), and performs similarly well as lasso (p = 0.753) and elastic net (p = 0.841). For rectum prediction, the proposed ensemble method predicts significantly better than stepwise (p < 0.001) and ridge (p < 0.001), and performs similarly well as lasso (p = 0.365) and elastic net (p = 0.373).

Mis-Classified Sparing Decisions

The validation set prediction errors of each model are shown in Figure 7. The proposed ensemble model significantly reduces prediction error compared with stepwise (p = 0.026) and ridge (p < 0.001), and performs equally well as elastic net (p = 0.091) and lasso (p = 0.115).

DISCUSSION

In summary, we propose an ensemble regression model to address two problems that we face in KBP. First, different individual regression models perform well in different settings, such as different numbers of relevant features, numbers of cases, and the existence of outliers. It would be very labor intensive to manually select the optimal model every time a model is fitted. Second, to ensure the most accurate model training, data preprocessing, including anatomical and dosimetric outlier removal, is also necessary for the individual models, and it can be subjective to decide which subset of cases should be removed from the training set if done manually. The proposed ensemble model applies multiple individual models to the same data and uses constrained linear optimization on the metadata to obtain the optimal weight for each individual model. In addition, the model automatically filters out cases in the training set that are not predictive of future cases, based on the metadata. We observe that the ensemble method consistently predicts better than, or similarly to, the best performing individual model in every challenging situation. With improved robustness, the proposed regression method potentially enables end users to build site-specific, physician-specific, or even planner-specific models without manually screening the training cases. This will eventually allow each practice to build models that accurately reflect its own optimal OAR sparing preferences and capabilities, thereby eliminating the need for a universal model. Figure 8 shows an example of the improved prediction accuracy of the proposed method compared with the individual models.
In this case, stepwise and ridge perform poorly while lasso and elastic net perform reasonably well, and the ensemble model outperforms all individual models. Note that in different situations different models perform well, and the proposed model performs the most consistently. Improved DVH prediction accuracy usually results in better plan optimization guidance (i.e., optimization constraint generation), since it provides the treatment planning system with correct information about the best achievable OAR sparing without compromising PTV coverage.

Building models for different treatment sites may present different challenges. For example, the number of cases required to train a model may differ: the more complex head and neck cases require more training cases to represent the case population well, while prostate cases have fewer OARs and are generally easier to train. Second, different treatment strategies are often used to treat different sites; for example, some sites require multiple PTVs while other sites require hard constraints. Last but not least, the amount of intrinsic variance in head and neck cases is greater than that in prostate cases due to potential trade-off considerations. As a result, dataset characteristics vary from treatment site to treatment site, and individual model performance varies correspondingly. The ensemble model ensures that the best performing model gets the highest weighting. All in all, each treatment site should be treated differently in KBP to get the best possible prediction accuracy, and the ensemble model helps to reduce the amount of effort required in terms of model selection. Ideally, the ensemble method should be trained for each treatment site, since data characteristics change from dataset to dataset. However, if there are two datasets from two treatment sites with very similar characteristics, such as DVH variability and number of cases, then it is possible to reuse the model weights αk* directly.

The main limitation of the proposed approach is the training time. The two major components of knowledge-based modeling are feature extraction and model training. The feature extraction part of the proposed model takes on average 5 s per case, and feature extraction is done only once. Model training takes less than 10 s for each individual model. In the proposed model, individual model training is repeated by the number of component models times the number of in-model cross validations. As a result, with our hardware setup, it takes less than 10 min to run a single ensemble model, and 30 min to run a 20-fold cross-validated ensemble model. The prediction procedure is very simple and takes less than 1 s to compute. Therefore, once a model is calculated, it can be easily stored and applied to DVH predictions.

Possible future research topics include the optimal selection of models as well as the optimal number of models in the ensemble. In this study, we limited the number of models included in the ensemble to avoid overfitting. While too many models in the ensemble risk overfitting the data, the current number of models (9) is very conservative. With the regularization provided by the non-negative constraint, the proposed approach could potentially see further performance improvements if more models were included in the ensemble. We expect the optimal number of models in the ensemble to depend on the size of the dataset. In addition, the proposed methodology can easily be extended to more complicated non-linear models.
We use linear models in the ensemble due to the limited training dataset size. As more cases become available, more complicated models become viable.

AUTHOR CONTRIBUTIONS

JZ proposed the model, conducted the experiments, and wrote the first draft of the manuscript. QW oversaw the workflow of the study and contributed to the clinical aspects of the study. TX extracted and pre-processed the data for the experiments in the paper. YS provided suggestions regarding the study design. F-FY provided critiques of the experimental design. YG contributed advice on the statistical methods and revised the manuscript.
2018-03-19T17:21:24.315Z
2018-03-19T00:00:00.000
{ "year": 2018, "sha1": "333fbcff59ce1eebd974cd4cf20cdefbb1921fd4", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2018.00057/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "333fbcff59ce1eebd974cd4cf20cdefbb1921fd4", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
256891948
pes2o/s2orc
v3-fos-license
Development of a Daily Living Self-Efficacy Scale for Older Adults in Japan

Objectives: With aging, older adults tend to experience decreased enjoyment and fulfillment in life, social interactions, and independent living. These situations often result in lower levels of daily living self-efficacy, which is one of the factors behind a decline in quality of life (QOL) among older individuals. For this reason, interventions that help maintain daily living self-efficacy among older adults may also help maintain a good QOL. The objective of this study was to develop a daily living self-efficacy scale for the elderly that can be used to evaluate the effects of interventions aimed at enhancing self-efficacy.

Methods: An expert meeting involving specialists in dementia treatment and care was held to prepare a draft of a daily living self-efficacy scale. In the meeting, previous studies on self-efficacy among older adults, which had been collected in advance, were reviewed, and the experiences of the specialists were discussed. Based on the reviews and discussions, a draft of a daily living self-efficacy scale comprising 35 items was prepared. The study on daily living self-efficacy was conducted from January 2021 to October 2021. The internal consistency and construct validity of the scale were evaluated based on the assessment data.

Results: The mean age ± standard deviation of the 109 participants was 84.2 ± 7.3 years. The following five factors were extracted based on factor analysis: Factor 1, "Having peace of mind"; Factor 2, "Maintaining healthy routines and social roles"; Factor 3, "Taking personal care of oneself"; Factor 4, "Rising to the challenge"; and Factor 5, "Valuing enjoyment and relationships with others". The Cronbach's alpha coefficient exceeded 0.7, suggesting sufficiently high internal consistency. Covariance structure analysis confirmed sufficiently high construct validity.

Conclusions: The scale developed in this study was confirmed to be sufficiently reliable and valid, and when used during dementia treatment and care to assess levels of daily living self-efficacy among older adults, it is expected to contribute to the improvement of QOL among older adults.

Introduction

Various mental and physical changes occur with aging, such as decreased cognitive function and an increased risk of frailty. As minor setbacks in daily life increase, older adults may become depressed more easily or even lose their zest for life. The scope and variety of their activities narrow, possibly leading to declines in mental and physical function as a result of disuse. Specifically, impaired cognitive function among the elderly can result in feelings of despair and resignation, as well as lower levels of self-efficacy. Moreover, when older adults interact less with others, they tend to feel less certain about their existence and often become less motivated or self-confident [1].

Bandura defined self-efficacy as an individual's belief in his or her capacity to execute necessary behaviors [2]. Based on Bandura's theory of self-efficacy, Tinetti et al. developed the Falls Efficacy Scale (FES) to evaluate an individual's confidence in his or her ability to avoid falls while performing activities; it included 10 items on activities of daily living (ADL) and instrumental activities of daily living (IADL) [3]. Hill et al. developed a revised version of the FES comprising 14 items [4].
The FES is an evaluation index for predicting falls and the impairment of mental and physical function that may occur in the future [5]. Referring to the FES, Suzuki et al. [6] developed a self-efficacy scale related to falls and attempted to evaluate self-efficacy in regard to ADL [7]. To maintain the quality of life (QOL) of older adults at a certain level, autonomy and well-being are important [8]. In addition, for older adults at risk of requiring nursing care, and for those suffering from dementia in particular, the maintenance of self-efficacy in daily living is considered essential for the maintenance of QOL. It is crucial for older adults to retain confidence in their ability to conduct ADL. We predicted that if older adults retained or regained confidence in their lives, the progression of dementia could be mitigated or prevented. The maintenance of self-efficacy can result not only in the maintenance of mental and physical functions, but also in the recovery of autonomy and the prolongation of life expectancy [9]. It is essential to maintain and enhance the life functions of older adults that improve their levels of daily living self-efficacy, such as enjoyment and fulfillment in life, social interactions, and independent living. However, to the best of our knowledge, no scale has been developed for measuring daily living self-efficacy levels among older adults in general. If daily living self-efficacy among older adults in general could be measured, the impact of dementia treatment and care on levels of self-efficacy among older adults could be evaluated. Medical treatment and care approaches that enhance self-efficacy levels among older adults can also improve their QOL. Therefore, this study aimed to develop a daily living self-efficacy scale for older adults, including those at risk of requiring nursing care, and to verify its reliability and validity.

Development of a Daily Living Self-Efficacy Scale for Older Adults

From September 2020 to November 2020, the authors reviewed the literature on existing scales related to self-efficacy among older adults and patients with chronic diseases [6,7], as well as self-efficacy in general. Daily living was defined as the enjoyment of and fulfillment in life, social interactions, forgetfulness, activities, and independence in daily living among older adults. Based on the results, items that could potentially be included in a self-efficacy scale were extracted and rephrased into words and expressions that were easier for older adults to understand. In this study, self-efficacy was defined, in accordance with Bandura's definition, as "the level of an individual's belief in his or her capacity to execute necessary behaviors well", in order to prepare the draft of the daily living self-efficacy scale [2]. As interventions intended for the recovery of self-confidence in, and motivation for, daily living are provided in medical practice and in the care of older adults, measuring the effects of these interventions is expected to be one of the main uses of the scale. An expert meeting involving physicians, mental health social workers, certified dementia nurses, and certified geriatric nurse specialists was held. The items constituting the daily living self-efficacy scale and the adequacy of the expressions in the scale were discussed, and a pilot version comprising 35 items was prepared.
To use the scale, participants responded to each question by indicating how confident they were for each item, pointing at one of four choices, from "4: Very confident" to "1: Not confident". The examiner read the question and answer choices aloud, while showing the participants the sheet with the choices displayed. Prior to the data collection process, 10 older persons participating in a care prevention project were tested using the pilot version of the scale to assess whether they could answer each question without difficulty. Older patients with Alzheimer-type dementia, older patients with mild cognitive impairment, and patients with early-onset dementia (n = 3) were included, so that the scale developed in this study could be used for older patients with impaired cognitive function. Impressions and comments regarding the ease of answering each question were collected from those who were tested, and the items of the scale were reorganized accordingly, resulting in a total of 31 items.

Selection Criteria of Participants

In addition to the attendants of care prevention classes, older individuals in need of nursing care attending day-care services were included as participants, because the scale being developed was intended not only for those at risk of requiring nursing care, i.e., those with frailty, but also for those currently in need of nursing care. Only the individuals who provided consent to participate in the study were examined. The selection criteria included older adults who could respond to the interview questions in the survey, those who attended day-care services or care prevention classes, and those who provided consent to participate in the study.

Study Period and Procedures

In this study, the data were collected by nurses with at least five years of experience in geriatric nursing, from January 2021 to October 2021. The participants involved in this study were older adults with a Mini-Mental State Examination (MMSE) score of 13 points or higher who had answered at least two of the three preliminary questions correctly. Referring to the development of the Japanese version of the dementia quality of life instrument (DQOL) [10], the preliminary questions were set to determine in advance whether the participants could understand the questions in this survey and select answers from the choices presented. Individuals meeting these criteria were included in the survey regardless of whether they had been diagnosed with dementia. In this study, the participants included older adults with dementia and older adults who were still able to respond appropriately to the questionnaire items, even though their cognitive function was declining. This is because it is important to measure and improve the daily living self-efficacy of older adults diagnosed with dementia or those beginning to experience cognitive decline. In addition to the self-efficacy scale for older adults, the MMSE and a subjective QOL scale were used. Data on the attributes of the participants were obtained from the attendance records of care prevention classes or day-care services. The evaluation of their daily activities and an assessment using the Gottfries-Brane-Steen scale (GBS) were performed based on the information obtained from the staff in charge.

Evaluation

Characteristics of the Participants

Data on the participants' gender, age, underlying conditions for dementia, and physical complications were obtained from the above-mentioned attendance records.
Mini-Mental State Examination (MMSE)

The MMSE is a screening examination for cognitive function. It comprises the following subscales: orientation in time (5 points) and place (5 points), registration (3 points), attention and calculation (5 points), recall (3 points), language (8 points), and copying (1 point) [12]. In this study, the total MMSE score was used for the statistical analysis. Lower scores indicated decreased cognitive function.

Subjective QOL Scale for Older Patients with Dementia

The dementia quality of life instrument (DQOL), which is a subjective QOL scale specific to older patients with dementia developed by Brod et al. [15], comprises five subscales, as follows: "Self-esteem", "Positive affect/humor", "Negative affect", "Feelings of belonging", and "Sense of aesthetics". The Japanese version has been evaluated in terms of reliability and validity [10,15], and can be used for the measurement of subjective QOL. For example, "Positive affect/humor" represents positive emotions, including humor, having fun, and being full of energy; "Sense of aesthetics" represents the sense of being conscious about and enjoying music, animals, or nature, among others.

Ethical Considerations

The outline of the study was described in the questionnaire. The survey was conducted anonymously to avoid personal identification. The protection of privacy and the presentation of study data at academic conferences were explained in writing. The individuals who provided written consent were included in the study. This study was approved by the Ethics Committee of Hamamatsu University School of Medicine (No. 20-284). For the participation of an individual certified as requiring nursing care, consent was obtained from a family member, as well as from the individual him/herself.

Statistical Analysis

Participants with missing values in the survey items were excluded from the data. Mean values and standard deviations (SDs) were calculated for each item. Item-total (IT) correlation analyses were performed, and the correlation coefficient of the total score and each item was confirmed. An exploratory factor analysis (principal factor method with varimax rotation) was also performed, and the items with high factor loading for two factors were removed to determine the factors. After determining the factors, with regard to reliability, the Cronbach's alpha coefficient was calculated to assess internal consistency. For the concordance rate, the Pearson correlation coefficient was used. For construct validity, a covariance structure model was tested for goodness of fit using the goodness-of-fit index (GFI). For validity in terms of the relationship with other scales, correlation coefficients with ADL, MMSE, GBS, and the Japanese version of the DQOL were calculated. IBM SPSS Statistics and AMOS version 21 (IBM, Chicago, IL, USA) were used for all statistical analyses.

Participants' Characteristics

The total number of participants involved in this study was 185 older adults attending two day-care services and two care prevention classes. Of the 149 older adults (80.5%) who provided consent, 127 (68.6%) met the inclusion criteria, which were an MMSE score of 13 or higher and correct answers to two of the three preliminary questions. Participants with missing data were excluded from the data analysis, and as a result, the final number of participants was 109 (58.9%) older adults. Twenty-five participants (22.9%) were male and 84 (77.1%) were female (Table 1).
The mean age of the participants was 84.2 years (SD = 7.3). Sixty-one participants (56.0%) were involved in care prevention programs, and 48 (44.0%) were users of day-care services. Based on family configuration, the largest number of participants (55 participants, 50.5%) lived with their children, followed by those who lived alone (23 participants, 21.1%). The nursing care level determined by the public long-term care system in Japan (from 1 to 5, based on an assessment of the care requirement) was "independent" for 55 participants (50.5%) and "long-term care level 1" for 25 participants (22.9%). Thirteen participants (11.9%) had been diagnosed with dementia. The most prevalent type of dementia was Alzheimer-type (five participants, 38.5%). The most frequently observed past or current disease was motor function disorder (44 participants, 40.4%), followed by circulatory disorder (35 participants, 32.1%). The most common physical function disorder was gait disturbance (46 participants, 42.2%), followed by hearing impairment (38 participants, 34.9%). The mean values of each scale are listed in Table 2. The mean ± SD ADL (Katz) and MMSE values were 6.8 ± 2.1 and 25.2 ± 4.5, respectively. The GBS item with the highest mean score was "B: Intellectual impairment" (3.98 ± 8.63), and the DQOL item with the highest mean score was "Negative affect" (43.3 ± 6.4).

Mean Values, IT Correlation Coefficients, and Test-Retest Reliability

The mean values and IT correlation coefficients of the items included in the daily living self-efficacy scale are listed in Table 3. Among these items, the mean value for "Having someone to rely on during an emergency" was the highest (3.76), followed by that for "Taking daily medications" (3.73). All the IT correlations were significant, ranging from 0.300 to 0.650. The test-retest reliability of each item after one week generated a score of over 0.827.

Verification of Reliability Using Cronbach's Alpha and Test-Retest Reliability

Firstly, an exploratory analysis using the Shapiro-Wilk test of normality was used to confirm the normal distribution of the data, after which factor analysis was conducted. The data obtained from the exploratory factor analysis, including the results of the factor analysis and the Cronbach's alpha for the daily living self-efficacy scale for older adults, are listed in Table 3. The test-retest reliability after one week generated a score of 0.927.

Verification of Construct Validity by Exploratory Factor Analysis

Based on the results of the factor analysis, the items with a factor loading below 0.4 and those with high factor loading for two factors were deleted, and the following five factors involving 23 items were selected: Factor 1, "Having peace of mind"; Factor 2, "Maintaining healthy routines and social roles"; Factor 3, "Taking personal care of oneself"; Factor 4, "Rising to the challenge"; and Factor 5, "Valuing enjoyment and relationships with others". The Cronbach's alpha was 0.72. The final version comprises 23 items. The eight items deleted from the initial 31 items were: living peacefully every day, having a purpose in life, talking to people by oneself, talking to people who have trouble communicating with me until they understand me, helping people in need, asking for help when in trouble, going to the bathroom, and taking a bath (Appendix A).
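For readers unfamiliar with the internal-consistency statistic reported above, the following is a minimal, self-contained sketch of how a Cronbach's alpha coefficient can be computed. The item responses below are invented toy values on the scale's four-point format, not the study's data.

```python
# Minimal sketch of Cronbach's alpha; the 4-point responses are toy values.
def cronbach_alpha(items):
    """items: one list of responses per scale item, all of equal length."""
    k, n = len(items), len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]  # per-person totals
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# three items answered by five people on a 1-4 response format
toy = [[4, 3, 4, 2, 3], [3, 3, 4, 2, 3], [4, 2, 4, 1, 3]]
print(round(cronbach_alpha(toy), 3))  # 0.92 for these toy data
```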
Verification of Construct Validity by Covariance Structure Analysis

The results of the covariance structure analysis, which was performed as a confirmatory factor analysis to evaluate construct validity, are shown in Figure 1. The five factors extracted in the exploratory factor analysis were used as latent variables. A model assuming covariance among the latent variables was set up, and covariance structure analysis was conducted. The model with the best fit was selected. In Figure 1, e1 to e28 indicate the error variables that are not reflected in the model. All standardized coefficients were significant. The daily living self-efficacy scale model comprised five factors (χ² = 42.162, p = 0.377). The fit indices were as follows: GFI = 0.957, adjusted GFI = 0.952, and root-mean-square error of approximation = 0.072. Thus, the goodness of fit was statistically verified. The path coefficients between the latent and observed variables were significant and ranged from 0.466 to 0.942.

Correlation Coefficients of Age, ADL, and MMSE

No significant correlation was found between age and each factor of the self-efficacy scale. The Katz ADL showed significant negative correlations with Factor 2, "Maintaining healthy routines and social roles" (correlation coefficient, −0.202) and Factor 3, "Taking personal care of oneself" (correlation coefficient, −0.356). The MMSE showed significant positive correlations with Factor 2, "Maintaining healthy routines and social roles" (correlation coefficient, 0.243) and Factor 3, "Taking personal care of oneself" (correlation coefficient, 0.308).

Verification of Convergent Validity on the DQOL "Sense of Aesthetics"

Regarding the Japanese version of the DQOL, "Sense of aesthetics" showed significant positive correlations with all five factors of the self-efficacy scale. "Self-esteem" showed significant positive correlations with three factors: Factors 1, 2, and 5 (correlation coefficients: 0.210-0.396).

Verification of Discriminant Validity

Among the items of the GBS, "A: Motor function" showed significant negative correlations with three factors, from Factor 2 to Factor 5, of the daily living self-efficacy scale (correlation coefficients, −0.193 to −0.352), and was not significantly correlated (p = 0.059), or negatively correlated, with Factor 1 of the daily living self-efficacy scale.

Verification of Concurrent Validity on the DQOL "Positive Affect/Humor"

The correlation coefficients between the self-efficacy scale for older adults or age, and the other scales, are listed in Table 4. The Japanese version of the DQOL "Positive affect/humor" showed significant positive correlations with all five factors of the self-efficacy scale.
Discussion

In this study, we developed a scale for measuring daily living self-efficacy among older adults. The reliability and validity of the scale were assessed among older adults at risk of requiring nursing care (participants of care prevention projects) and those currently receiving nursing care (covered by nursing care insurance or certified as requiring nursing care).

Participants' Characteristics

The cognitive functions of the participants were assessed using the MMSE, and the mean score was 25.2 points. The levels of ADL were assessed using the Katz ADL scale, and the mean score was 6.8 points. These results indicate that many participants were independent when performing ADL. The level of necessity of nursing care was "independent" among approximately half of the participants, and among most of those requiring nursing care, the level of necessity was low ("Nursing care level 1"). A few participants requiring high levels of care ("Nursing care level 4" and "Nursing care level 5") had severe physical dysfunctions, such as the sequelae of stroke, but they were eligible to participate in the survey.

Deleted Items in the Daily Living Self-Efficacy Scale for Older Adults

Items in the self-efficacy scale for older adults were carefully discussed among the co-researchers (gerontology specialists), based on the eight items with low factor loadings in the factor analysis. Based on these discussions, eight items were deleted, resulting in 23 items. The following items were difficult to understand and too challenging for the participants and were, therefore, removed: "living peacefully every day", "having a purpose in life", "talking to people by oneself", "talking to people who have trouble communicating with me until they understand me", "helping people in need", and "asking for help when in trouble". Furthermore, the items "going to the bathroom" and "taking a bath" were deleted because these items focused on ADL performance. The deletion of items was reviewed by the experts involved in the development of the scale, and it was concluded that the content of the scale was further refined as a measurement of daily living self-efficacy.

Construct Validity

Exploratory Factor Analysis

The factor analysis of the data collected using the daily living self-efficacy scale developed in this study demonstrated a five-factor structure comprising the following factors: Factor 1, "Having peace of mind"; Factor 2, "Maintaining healthy routines and social roles"; Factor 3, "Taking personal care of oneself"; Factor 4, "Rising to the challenge"; and Factor 5, "Valuing enjoyment and relationships with others".
Factor 1, "Having peace of mind", comprised the following items: "Enjoying conversations with friends and family", "Enjoying spending time with friends and family", and "Having someone to rely on during an emergency", which indicate relationships with people the participants can trust, and "Having a good life" as well as "Feeling fulfilled every day", which indicate a positive evaluation of the self, with five items in total. Lawton pointed out the importance of QOL and psychological well-being among older adults [16], and developed positive affect (PA), which can be used for the evaluation of enjoyable activities and interactions with others [17]. Lawton also reported that PA affected QOL. Lawton's study suggested the adequacy of the fact that the five items included in "Having peace of mind" were established as Factor 1 in this study. Factor 2, "Maintaining healthy routines and social roles", included the following items: "Going out at least once a week" and "Taking daily medications", which indicate daily routines, as well as "Helping others". Older adults are often recipients of nursing care, with their family members being the providers. However, among older adults, their engagement in activities that help others may result in the enhanced maintenance of equal relationships with others and possibly, self-efficacy. Rabins and Kasper developed Alzheimer's Disease-Related Quality of Life (AD-QOL) as a health-related QOL scale specific to dementia, and they included "social interaction" and "relationship with surroundings" as QOL domains [18]. Maintaining certain relationships with others can result in the maintenance of social roles. Additionally, items related to the autonomy of older adults, such as "Accomplishing important tasks to the end" would be associated with social roles. As for Factor 3, "Taking personal care of oneself" represents self-confidence in "Buying daily necessities" and "Withdrawing money from banks and post offices". Older adults are highly likely to experience interferences with daily living because of age-related physical and mental changes. Suzuki et al. attempted to measure self-efficacy in ADL, and they reported that the maintenance of functions for ADL, including IADL, was critical for the maintenance of selfefficacy among older adults living in local communities [6]. Factor 4, "Rising to the challenge", comprised two items: "Snapping out of it when depressed" and "Remaining positive despite failure", which are closely related to self-efficacy among older adults. Factor 5, "Valuing enjoyment and relationships with others", included various items, such as "Doing what I love and having fun" and "Being energetic and feeling good". Yamamoto-Mitani et al. raised control of emotion as a subscale of the QOL scale for older patients suffering from dementia [19], and Perach et al. pointed out that the control of emotions is crucial in decision making among older patients suffering from dementia [20]. These findings suggest that among older adults with declining cognitive functions, it is crucial to maintain a good emotional status by valuing enjoyment and relationships with others to ensure the maintenance of self-efficacy. An examination of the construct validity of the scale proposed in this study showed that both Factor 2, "Maintaining healthy routines and social roles", and Factor 3, "Taking personal care of oneself", were significantly associated with ADL and MMSE. 
Conn pointed out that a decline in physical function with increased age is a factor influencing decreased levels of self-efficacy [21]. Similar results were obtained in this study with regard to the impact of physical and cognitive functions. Moreover, because Factor 2 includes an item related to social roles, and Factor 3 includes an item on self-confidence in IADL, these factors may reflect judgment in social life, IADL, and the characteristics of actions and behaviors, such as cognitive function.

Reliability of Cronbach's Alpha and Test-Retest

Regarding reliability, Cronbach's alpha values showed scores of 0.7 or higher for individual factors, thereby indicating sufficient internal consistency. Concerning the confirmatory factor analysis, the criteria for the goodness-of-fit indices were met, thereby confirming the validity of the factor structure involving five factors and 23 items. Although the participants involved in this study included 13 older adults who were diagnosed with dementia, the test-retest reliability after one week generated a score of 0.927, and the reliability of the scale was satisfactorily established.

Concurrent and Convergent Validity

All the subscale items of the Japanese version of the DQOL "Positive affect/humor" were significantly associated with each factor of the self-efficacy scale.

Concurrent Validity

In this study, daily living self-efficacy is linked to feelings of "Positive affect/humor" as a test for concurrent validity. Self-efficacy in ADL results in feelings of self-affirmation. In this study, "Positive affect/humor" was used to verify concurrent validity. All the subscale items in the Japanese version of the DQOL "Sense of aesthetics" were significantly associated with each factor of the self-efficacy scale. Aesthetics is a component of QOL that supports daily living self-efficacy, and it was used for the purpose of convergent validity. These items involving positive affect and sense of aesthetics are related to the emotional states raised by Bandura and Cervone [22] as antecedent factors for self-efficacy. Their significant associations with all the items contained in the self-efficacy scale clearly indicate the validity of the self-efficacy scale proposed in this study. Moreover, a significant association was found between the subscale of the Japanese version of the DQOL "Feelings of belonging" and Factor 5, "Valuing enjoyment and relationships with others", thereby further suggesting that the feeling of belonging that motivates older adults to help others, or that they experience when they feel loved by others, is one of the factors supporting self-efficacy. Based on the results mentioned above, the reliability and validity of the self-efficacy scale for older adults were confirmed. This study is expected to be useful for the evaluation of the effects of care interventions intended for the enhancement of self-efficacy among older adults, as well as the accumulation of evidence for the evaluation of relationships between self-efficacy and cognitive/physical functions among the elderly for the purpose of extending life expectancy.

Limitations and Directions for Future Research

This study involved 109 participants. According to Floyd, for a sample size of 100 participants, five participants per variable would be considered adequate to yield reliable results, whereas 10 participants per variable would be sufficient for a sample size of less than 100 participants [23].
Therefore, the sample size of 109 participants in this study was not sufficient for the exploratory factor analysis and covariance structure analysis. This study did not obtain enough participants owing to the coronavirus disease 2019 (COVID-19) pandemic. Therefore, future studies on this subject must consider an enhanced factor analysis. This study was developed to support older adults with MMSE scores of 13 or higher, participating in day services and care prevention classes, in maintaining daily living self-efficacy, even in the face of cognitive decline. Therefore, the participants involved in this study were older adults with mild cognitive impairments, and this aspect may have resulted in biased results. In order to ensure the reliability of the responses, only those participants who were able to respond appropriately were included in this study. Therefore, participants who were suffering from cognitive decline and could not respond appropriately were removed from the study. This is a limitation of the study, and we plan to work on ways to further assess older adults with cognitive decline in regard to self-efficacy in the future. This study involved older individuals attending care prevention classes and day-care services. These participants do not represent the general older population. In the future, to expand the use of the daily living self-efficacy scale proposed in this study, we plan to test this scale among healthy older adults and patients suffering from advanced dementia. Through this scale, we expect to clarify the impact of the treatment and care of older patients suffering from dementia on both self-efficacy and life expectancy.

Conclusions

The self-efficacy scale developed in this study was confirmed to be reliable and valid, although the number of participants was small and the sampling was limited to day-care services and care prevention classes; thus, further verification of reliability and validity is required in future studies. When used during dementia treatment and care to assess the levels of daily living self-efficacy among older adults, the daily living self-efficacy scale proposed in this study is expected to contribute to the improvement of QOL among older patients suffering from dementia.
Study on Data Security Sharing Model Based on Attribute Encryption

In view of the problems of key management and access control in traditional encryption technology, this paper proposes a data security sharing model based on attribute encryption. Firstly, we studied attribute encryption and the data sharing model. Secondly, we proposed an attribute encryption algorithm. The user's private key is generated by an additively homomorphic encryption algorithm, which solves the key escrow problem. This paper proposes a data security sharing algorithm supporting partial decryption, which can reduce the computational cost of user decryption. An efficient scheme for real-time attribute key revocation is designed, which not only reduces the communication overhead of key updates, but also protects the forward and backward security of the data. Compared with the other two methods, the proposed method has lower computational and decryption costs.

Introduction

Cloud computing platforms have abundant computing and storage resources, and concentrate a large amount of users' private data. At the same time, large-scale networked systems face more complex security threats than traditional computing systems. The characteristics of the cloud computing service model and the multi-user sharing usage model bring new challenges to data security [1-3]. In recent years, data security sharing models have attracted more and more attention from scholars. Literature [4] proposes a credible big data sharing model based on blockchain technology and smart contracts to ensure the safe flow of data resources [4]. Literature [5] proposes a secure data sharing scheme in OSNs based on ciphertext-policy attribute-based proxy re-encryption and secret sharing. In order to protect users' sensitive data, the scheme allows users to customize access policies for their data and then outsource encrypted data to the OSN service provider [5]. Literature [6] proposed a big data security service model involving big data providers, users, and cloud service providers to realize reliable data security sharing services [6]. Although the above research has addressed data security, the existing attribute-based data security sharing models in the cloud computing platform have some shortcomings: (1) the attribute authority can obtain the user's attribute private key and decrypt the data; (2) users need to run complex pairing and exponential operations during decryption; (3) when a key is revoked, the user's private key needs to be updated and distributed to the user. In view of the above problems, this paper proposes a secure and efficient data security sharing model based on attribute encryption.

Attribute encryption and data sharing

The rapid development of cloud computing allows users to upload data to the cloud computing platform and use the data anytime and anywhere, but the semi-trusted environment of cloud computing also brings serious security and privacy issues [7]. Encryption technology is an effective means to protect data confidentiality. Users encrypt the data before storing it on the cloud server, and the cloud service provider cannot obtain the plaintext of user data [8]. Attribute encryption is an encryption technique that supports fine-grained access control. It allows data owners to encrypt content under an access policy, and only users whose attributes satisfy the access policy can decrypt the content [9].
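As a concrete illustration of the "attributes satisfy the access policy" condition, the short sketch below evaluates a threshold-gate access tree of the kind used by CP-ABE against a user's attribute set. The node layout, attribute names, and function name are our own illustrative choices, not an implementation from the paper.

```python
def satisfies(node, attributes):
    """True if `attributes` satisfies the access-tree `node`."""
    if "attr" in node:                        # leaf: a single attribute test
        return node["attr"] in attributes
    hits = sum(satisfies(c, attributes) for c in node["children"])
    return hits >= node["threshold"]          # k-of-n gate (AND: k=n, OR: k=1)

# "(doctor AND cardiology) OR admin", expressed with threshold gates
policy = {"threshold": 1, "children": [
    {"threshold": 2, "children": [{"attr": "doctor"}, {"attr": "cardiology"}]},
    {"attr": "admin"},
]}

print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"doctor"}))                # False
print(satisfies(policy, {"admin"}))                 # True
```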
However, data sharing schemes based on attribute encryption suffer from the key escrow problem; that is, the attribute authority can compute the attribute private key of any user and decrypt the data stored in the cloud computing platform.

Attribute encryption

Attribute encryption takes attributes as public keys, associates the ciphertext and user private keys with attributes, and flexibly represents access policies. When the user's private key matches the access policy of the ciphertext, the user can decrypt the ciphertext. Attribute encryption includes both KP-ABE and CP-ABE. In KP-ABE, the user's attribute private key is related to the access policy, and the ciphertext is related to a set of attributes. The KP-ABE scheme mainly consists of the following four algorithms: (1) System initialization algorithm (Setup). The attribute authority inputs a security parameter k and outputs the system public key PK and the system master key MK. (2) Key generation algorithm (KeyGen). The attribute authority inputs the system master key MK and an access policy T, generating a private key SK for the user. (3) Encryption algorithm (Encrypt). The user inputs the system public key PK, the data plaintext M, and an attribute set S to generate the ciphertext CT. (4) Decryption algorithm (Decrypt). The user inputs the ciphertext CT and the private key SK. If the attributes in the ciphertext satisfy the access policy T, the data plaintext M is recovered. Since the CP-ABE ciphertext is associated with the access policy, CP-ABE is more suitable for access control in the cloud computing platform.

Data sharing model

Under the cloud computing platform, cloud service providers allow data to be uploaded, accessed, backed up, and shared in the cloud computing environment through the network, providing users with convenient and fast data storage and sharing services [10]. At present, typical data sharing applications under the cloud computing platform include cloud storage platforms, cloud social networking platforms, and cloud health platforms [11]. The data sharing model under the cloud computing platform generally has three main entities: cloud service providers, data owners, and users, involving the various stages of creation, storage, use, sharing, archiving, and destruction in the data life cycle, as shown in figure 1 [12]. (1) Data upload. The data owner uploads the data to the cloud computing platform and can either expose the shared data or specify which users can access and modify the data. (2) Data access. Authorized users can easily access data uploaded by the data owner through the cloud service provider. (3) Data modification. Data owners can re-upload or modify data stored in the cloud computing platform. Authorized users can also modify the data stored in the cloud computing platform according to the usage requirements of the data. Since the data owner's data is stored on the cloud server, the data owner does not have complete control over the data, so the user's data must be secured. One of the data security requirements under the cloud computing platform is data confidentiality, which means protecting users' data from being compromised or stolen by semi-trusted cloud server providers while ensuring that only authorized users can access the data.

Model conception

The model is mainly composed of five parts: the attribute authority, the cloud service provider, the key server, the data owner, and users, as shown in figure 2. (1) Attribute authority.
The attribute authority is an authoritative attribute management organization responsible for distributing attributes to users; together with the key server, it generates attribute private keys for users. In this scheme, we assume that the attribute authority is a semi-trusted third party. (2) Key server. The key server is a semi-trusted third party that, jointly with the attribute authority, generates the user's attribute private key. In addition, when the user decrypts data, the key server performs a partial decryption operation for the authorized user and then returns the partially decrypted ciphertext to the user. (3) Cloud service provider. The cloud service provider offers data storage and download services over the network through dynamic resources. The cloud service provider is also a semi-trusted third party: it stores the encrypted data uploaded by the data owner and may try to obtain the plaintext. (4) Data owner. The data owner first encrypts the data using a data key, then encrypts the data key under an access policy, and finally uploads the ciphertext to the cloud service provider. (5) Users. When a user downloads data from the cloud service provider, the key server partially decrypts the user's ciphertext if the user's attributes satisfy the access policy of the encrypted data.

The scheme consists of the following algorithms. (1) Setup(k). On input a security parameter k, the attribute authority generates a public key PK_A and a private key SK_A, and the key server likewise generates a public key PK_K and a private key SK_K. (2) KeyGen(). Based on an additively homomorphic encryption algorithm, the attribute authority and the key server jointly generate the user key SK_A and send it to the user securely. (3) KeyGen. The key server inputs a security parameter r and the user attribute set S to generate the attribute key SK_K. (4) Encrypt(PK_A, PK_K, M, T). The data owner inputs the public key PK_A of the attribute authority, the public key PK_K of the key server, the data plaintext M, and the access policy tree T, and outputs the encrypted data CT. First, the data plaintext M is encrypted with a random data key DK using a symmetric encryption algorithm; then DK is encrypted under the access policy tree T using the CP-ABE algorithm. (5) PartDec(CT, SK_K). The key server inputs the ciphertext CT and the user's attribute key SK_K. If the user's attributes satisfy the access policy of the ciphertext CT, it outputs the partially decrypted ciphertext CT_P. (6) Decrypt(CT_P, SK_A). The user inputs the partially decrypted ciphertext CT_P and the user key SK_A, decrypts DK, and then uses DK to decrypt the data plaintext M. (7) ReKey(SK_K, a_i). When the attribute authority revokes a user's attribute a_i, the key server regenerates the attribute keys SK_K of the users who hold the attribute a_i. (8) ReEncrypt(CT, a_i). When the attribute authority revokes a user's attribute a_i, the key server re-encrypts the ciphertexts CT related to the attribute a_i and outputs the re-encrypted ciphertext CT'.

Algorithm description

(1) System initialization. The attribute authority runs the Setup algorithm to construct a bilinear group G_1 of prime order p, where g denotes a generator of G_1 and e: G_1 × G_1 → G_2 denotes the corresponding bilinear map. (2) Key generation. The attribute authority and the key server first run the KeyGen algorithm and then generate the user key based on the additively homomorphic encryption algorithm: (a) the attribute authority encrypts its secret share α under its additively homomorphic public key, W = Enc_{PK_A}(α), and sends W to the key server; (b) the key server randomly selects its own share β, homomorphically adds it to W, and returns the result V to the attribute authority; (c) the attribute authority decrypts V based on the additively homomorphic encryption algorithm to obtain the combined secret X, from which the user key is derived. A toy illustration of the additive homomorphism used in step (b) is sketched below.
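The joint key generation above relies only on the fact that an additively homomorphic cipher lets two parties combine their secret contributions under encryption. Below is a minimal Paillier-style sketch of that property; the toy primes and the share values alpha and beta are illustrative choices of ours, not parameters from the paper.

```python
# Paillier toy demo: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
import random

p, q = 293, 433                       # toy primes, far too small for real use
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1)               # phi(n); lcm(p-1, q-1) is the usual choice
g = n + 1                             # standard simple generator choice
mu = pow(lam, -1, n)                  # inverse used in decryption (Python 3.8+)

def enc(m):
    r = random.randrange(1, n)        # toy: skips the gcd(r, n) = 1 check
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

alpha, beta = 12345, 67890            # illustrative secret shares (ours)
v = (enc(alpha) * enc(beta)) % n2     # combine the shares under encryption
assert dec(v) == alpha + beta         # decryption yields the joint secret
print(dec(v))                         # 80235
```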
(3) Data encryption. The data owner runs the Encrypt algorithm, sets the access strategy tree T, encrypts the data M, and outputs the ciphertext CT. First, the data owner selects a random secret s ∈ Z_p and shares it over the access strategy tree T. Suppose Y represents the set of attributes corresponding to the leaf nodes of T; the ciphertext is then constructed from the data key DK blinded by e(g, g)^{αs}, together with one component for each leaf attribute in Y. The data owner uploads the ciphertext CT to the cloud service provider. (4) Data decryption. After the user obtains the ciphertext from the cloud service provider, the user sends a decryption request to the key server. The key server runs the PartDec algorithm and partially decrypts the ciphertext using the attribute key. The decryption process is implemented by a recursive algorithm DecryptNode(CT, SK_K, x), whose inputs are the ciphertext CT, the attribute key SK_K, and a node x of the access strategy tree T. If x is a leaf node, define i = attr(x) and pair the key component for attribute i with the corresponding ciphertext component. If x is not a leaf node, run DecryptNode(CT, SK_K, z) for every child node z of x and save the result in F_z. Let S_x be an arbitrary set of k_x child nodes z for which F_z is defined; the values F_z for z ∈ S_x are combined by Lagrange interpolation at zero to recover the share q_x(0) in the exponent (a self-contained sketch of this interpolation step is given below). Proceeding recursively up to the root node removes the blinding value, after which the user decrypts DK and finally uses DK to decrypt the data plaintext M. (5) Attribute revocation. The key server then runs the ReEncrypt algorithm to re-encrypt the ciphertext: it randomly selects s' ∈ Z_p and updates all ciphertext components containing the attribute a_i. After revocation, a non-revoked user can still run the PartDec and Decrypt algorithms to decrypt the re-encrypted ciphertext CT'. If the attributes of the non-revoked user satisfy the access policy tree T, the partial decryption proceeds as before, the non-revoked user decrypts DK as shown in equation (13), and finally uses DK to decrypt the data plaintext M.

Safety analysis

(1) Data confidentiality. First, under the DBDH hardness assumption, the data security sharing model based on attribute encryption is IND-CPA secure. The data owner encrypts the data based on the CP-ABE algorithm before uploading the ciphertext CT to the cloud service provider. Since the symmetric key is a random number, if the user's attributes cannot satisfy the access policy T of the ciphertext, e(g, g)^{αs} cannot be recovered and DK cannot be decrypted. (2) Resistance to key escrow. The key server helps the user to partially decrypt the ciphertext but cannot decrypt DK without the user key SK_A. In addition, the attribute authority and the key server jointly generate the user's attribute private key based on the additively homomorphic encryption algorithm: the attribute authority cannot learn the key server's contribution to the user's key, and the key server cannot learn the attribute authority's contribution. Therefore, neither the attribute authority nor the key server can generate the user key separately, and neither can decrypt DK. (3) Forward and backward security. When a user's attribute is revoked, the cloud service provider uses s' to re-encrypt the ciphertexts related to that attribute, and the attribute authority updates the attribute keys of the corresponding users, which ensures the forward security of the data. When a new user acquires an attribute, the attribute authority updates the attribute keys of the users who already hold that attribute; meanwhile, the cloud service provider uses s' to re-encrypt the ciphertexts related to the attribute, which ensures the backward security of the data.
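The Lagrange step in the DecryptNode recursion above is ordinary polynomial interpolation at zero over Z_p (carried out in the exponent in the real scheme). A self-contained sketch, with a known Mersenne prime and a 2-of-3 gate of our own choosing:

```python
# Reconstructing q(0) at a k-of-n threshold gate via Lagrange interpolation.
P = 2**127 - 1  # a known Mersenne prime, standing in for the group order p

def lagrange_at_zero(points, prime=P):
    """Recover q(0) from k points (x_i, q(x_i)) of a degree k-1 polynomial."""
    secret = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = (num * -xj) % prime        # product of (0 - x_j)
                den = (den * (xi - xj)) % prime  # product of (x_i - x_j)
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

# a 2-of-3 gate: shares of q(x) = 42 + 7x; any two children recover q(0) = 42
shares = [(x, (42 + 7 * x) % P) for x in (1, 2, 3)]
print(lagrange_at_zero(shares[:2]))  # 42
print(lagrange_at_zero(shares[1:]))  # 42
```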
Experimental comparison

The proposed model is compared with other models in terms of computation and communication performance, and the results are shown in Table 1, where S represents the number of attributes of the user, N the number of attributes in the access policy, R the number of users who have not had attributes revoked, C1 the size of an element of G1, Cp the size of an element of Zp, P a pairing operation, E1 an exponentiation in G1, and E2 an exponentiation in G2; other operations can be ignored.

Table 1. Comparison of computation and communication performance.
Scheme | Access structure | Encryption cost | Decryption cost | Key update communication
Hur's method [14] | tree | (2N+1)E1+E2 | (S+2N+2)P+2SE1+NE2 | RC1
Yang's method [15] | LSSS | (3N+1)E1+E2 | E2 | RC1
This paper's method | tree | (2N+1)E1+E2 | P | 0

The computation overhead of user decryption was analyzed experimentally. The experimental environment is a 32-bit Ubuntu 12 system with a 2.53 GHz CPU and 2 GB of memory, and the open-source PBC library is used to implement the attribute encryption algorithm. The PBC library is built on the well-known open-source mathematical library GMP and provides a variety of operations based on bilinear pairings. In the experiment, the number of attributes owned by the user is set to 100, and the computation time of the model in this section and the model of Hur et al. for user data decryption is measured for different numbers of attributes contained in the access policy, as shown in figure 3. The model in this paper is also compared with existing cloud computing platform data security sharing models in terms of key escrow, partial decryption, forward security, and backward security, and the results are shown in Table 2.

Table 2. Comparison of functionality.
Scheme | Key escrow solution | Partial decryption | Forward security | Backward security
Hur's method [14] | secure two-party computation | No | Yes | Yes
Yang's method [15] | No | Yes | Yes | Yes
This paper's method | additive homomorphic encryption | Yes | Yes | Yes

Results analysis

The comparison results show that, in the model in this section, the user has an obvious advantage in data decryption: only one pairing operation is needed, and its computational cost is independent of the number of attributes in the access policy. In addition, the model in this section does not require the user to update the attribute key when an attribute is revoked, while all the other models require additional communication overhead to update the user's attribute key. Both the model in this section and that of Hur et al. solve the key escrow problem, which prevents the attribute authority from using the generated attribute private keys to decrypt the data owner's data. In addition, both the model in this section and the model of Yang et al. allow the key server to partially decrypt the ciphertext, reducing the user's decryption overhead. The model in this section also achieves forward and backward security.

Conclusion

In this paper, a data security sharing model based on attribute encryption is proposed, and the user's attribute private key is generated by an additively homomorphic encryption algorithm to solve the key escrow problem. At the same time, the model not only supports partial decryption by the key server, reducing the computational overhead of user decryption, but also realizes efficient real-time attribute key revocation. This paper realizes: (1) using the additively homomorphic encryption algorithm to generate the user's attribute private key, solving the key escrow problem.
(2) A data security sharing algorithm supporting partial decryption is proposed, which can reduce the computational overhead of user decryption. (3) An efficient scheme for immediate attribute key revocation is designed, which reduces the communication overhead of key updates and protects the forward and backward security of the data.
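To make the workflow summarized in these conclusions concrete, here is a minimal end-to-end sketch of the hybrid pattern the model uses: a random data key DK for the payload, an attribute-gated wrapping of DK, and partial decryption at the key server. All names and the toy cipher are our own stand-ins; the stubs replace the pairing-based CP-ABE components and are not the paper's implementation.

```python
# Toy end-to-end flow: Encrypt -> PartDec -> Decrypt (stubs, not real crypto).
import os, hashlib

def stream_cipher(key: bytes, data: bytes) -> bytes:
    # toy keystream cipher standing in for the unspecified symmetric algorithm
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

def abe_encrypt(required_attrs, dk):
    # stub for CP-ABE wrapping of DK under an AND-only policy
    return {"policy": frozenset(required_attrs), "wrapped_dk": dk}

def part_decrypt(ct, user_attrs):
    # stub for the key server's PartDec; a real scheme returns a blinded
    # group element that only the holder of SK_A can finish decrypting
    if not ct["policy"] <= set(user_attrs):
        raise PermissionError("attributes do not satisfy the access policy")
    return ct["wrapped_dk"]

# data owner: random DK, symmetric encryption of M, policy-wrapped DK
dk = os.urandom(32)
ciphertext = stream_cipher(dk, b"shared record M")
wrapped = abe_encrypt({"doctor", "cardiology"}, dk)

# user: key server checks the policy and partially decrypts; user recovers M
recovered = part_decrypt(wrapped, {"doctor", "cardiology", "staff"})
print(stream_cipher(recovered, ciphertext))  # b'shared record M'
```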
Relative effectiveness of clinic and home blood pressure monitoring compared with ambulatory blood pressure monitoring in diagnosis of hypertension: systematic review

Objective To determine the relative accuracy of clinic measurements and home blood pressure monitoring compared with ambulatory blood pressure monitoring as a reference standard for the diagnosis of hypertension.

Design Systematic review with meta-analysis using hierarchical summary receiver operating characteristic models. Methodological quality was appraised, including evidence of validation of blood pressure measurement equipment.

Data sources Medline (from 1966), Embase (from 1980), Cochrane Database of Systematic Reviews, DARE, Medion, ARIF, and TRIP up to May 2010.

Eligibility criteria for selecting studies Eligible studies examined the diagnosis of hypertension in adults of all ages using home and/or clinic blood pressure measurement compared with diagnoses made using ambulatory monitoring, with clearly defined thresholds to diagnose hypertension.

Results The 20 eligible studies used various thresholds for the diagnosis of hypertension, and only seven studies (clinic) and three studies (home) could be directly compared with ambulatory monitoring. Compared with ambulatory monitoring thresholds of 135/85 mm Hg, clinic measurements over 140/90 mm Hg had mean sensitivity and specificity of 74.6% (95% confidence interval 60.7% to 84.8%) and 74.6% (47.9% to 90.4%), respectively, whereas home measurements over 135/85 mm Hg had mean sensitivity and specificity of 85.7% (78.0% to 91.0%) and 62.4% (48.0% to 75.0%).

Conclusions Neither clinic nor home measurement had sufficient sensitivity or specificity to be recommended as a single diagnostic test. If ambulatory monitoring is taken as the reference standard, then treatment decisions based on clinic or home blood pressure alone might result in substantial overdiagnosis. Ambulatory monitoring before the start of lifelong drug treatment might lead to more appropriate targeting of treatment, particularly around the diagnostic threshold.

Introduction

High blood pressure is a key risk factor for the development of cardiovascular disease 1 and is a major cause of morbidity and mortality worldwide. 2 Hypertension is the commonest chronic disorder seen in primary care, with around one in eight of all people receiving antihypertensive treatment. 3 4 Initial management of hypertension conventionally requires a diagnosis based on several clinic or office blood pressure measurements. 5-7 National and international guidelines recommend similar strategies, although the thresholds of blood pressure for diagnosis and risk vary. 5-9 Ambulatory blood pressure monitoring, however, estimates "true" mean blood pressure more accurately than clinic measurement because multiple readings are taken; it has also been shown to have better correlation with a range of cardiovascular outcomes and end organ damage. 10-15 Ambulatory blood pressure monitoring is typically used when there is uncertainty in diagnosis, resistance to treatment, irregular or diurnal variation, or concerns about variability and the "white coat" effect. 16 17 18 It has therefore arguably become the reference standard for the diagnosis of hypertension. Home blood pressure monitoring, which provides multiple readings over several days, is also better correlated with end organ damage than clinic measurement. 19 20
It seems to be a better prognostic indicator with respect to stroke and cardiovascular mortality 21-23 and can identify white coat and masked hypertension. It could provide an appropriate alternative to ambulatory monitoring in terms of diagnosis, particularly in primary care, where ambulatory monitoring might not be immediately available or might be deemed too costly, or when patients find it inconvenient or uncomfortable. Home monitoring has a smaller evidence base than ambulatory monitoring but has gained acceptance over recent years as data accumulate and accurate equipment becomes more widely available. 24 25 If guidelines are to retain clinic measurement as a standard diagnostic tool, it is important to assess it in the light of ambulatory measurement. Similarly, for home measurements to be considered as an alternative to ambulatory measurements, their test performance needs to be evaluated. We conducted a systematic review of the test performance of the diagnosis of hypertension by clinic measurement and home monitoring compared with the reference standard of ambulatory monitoring.

Inclusion criteria

We had various criteria for inclusion.

Types of study-Studies had to have extractable data for diagnoses of hypertension made with home and/or clinic blood pressure measurement compared with those made with ambulatory measurement. There was no restriction on language or year of publication.

Types of participants in studies-We included adult patients of all ages. Studies were excluded if participants were pregnant, in hospital, or receiving treatment at the time of the comparison, unless these groups could be excluded from other data within a paper. Although we aimed to derive data relevant to primary care, no restriction was placed on setting other than excluding patients in hospital.

Types of outcome measures-We extracted data into 2×2 tables for comparisons of the diagnosis of hypertension, provided that clearly defined thresholds for the diagnosis of hypertension were used. Studies from which 2×2 tables could not be derived were excluded.

Reference standard-We chose ambulatory monitoring as the reference standard, with 135/85 mm Hg as the internationally accepted threshold for diagnosis on mean daytime readings. 7 Among the various indirect methods of measuring blood pressure, ambulatory monitoring shows the strongest relation with clinical outcome and estimates blood pressure more accurately because multiple readings are taken. 10-15 It thus represents the most appropriate choice of reference standard. Some studies have suggested that night time average blood pressure is superior to daytime at predicting cardiovascular outcomes, 26 but there is greater consensus over the threshold to use for daytime averages than for night time averages. 5 28

Search strategy

To improve sensitivity in the search, 29 we combined three separate search strategies using Medline and Embase (the full Medline search strategy is shown in appendix 2 on bmj.com): we combined keywords for hypertension, blood pressure monitoring, outpatient setting, and diagnosis; we limited MeSH terms for hypertension to the diagnosis subheading and combined this with keywords for blood pressure monitoring and outpatient setting; and we combined keywords for hypertension, blood pressure monitoring, and outpatient setting, with a limit using the diagnosis search filter.
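To illustrate how each study's 2×2 table feeds the analysis, the sketch below computes sensitivity and specificity against the ambulatory reference standard. The counts are invented for illustration and are not taken from any included study.

```python
# Sensitivity/specificity from a single study's 2x2 table (toy counts).
def sens_spec(tp, fp, fn, tn):
    # reference standard: ambulatory monitoring (daytime mean >= 135/85 mm Hg)
    sensitivity = tp / (tp + fn)   # true hypertensives correctly labelled
    specificity = tn / (tn + fp)   # true normotensives correctly labelled
    return sensitivity, specificity

# hypothetical cohort screened with clinic measurement at >= 140/90 mm Hg
sens, spec = sens_spec(tp=149, fp=51, fn=51, tn=149)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # 74.5%, 74.5%
```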
Selection of studies

Two reviewers (JH and RJMcM) independently reviewed the titles and abstracts of articles identified by the search strategy for potential relevance to the research question. After this process, the full papers of potentially eligible studies were assessed.

Data management and extraction

Two of four reviewers (JH, RJMcM, UM, JM) carried out data extraction from included papers in duplicate (the data extraction form template is in appendix 3 on bmj.com). Differences in data extraction were resolved by consensus. When necessary we contacted the authors of the primary studies to obtain additional information.

Assessment of methodological quality

We additionally collected information on recognised sources of bias in diagnostic test accuracy studies using a version of the QUADAS (Quality Assessment of Diagnostic Accuracy Studies) checklist, 30 adapted for this study. The box lists the quality criteria considered.

Data synthesis

We extracted estimates of sensitivity and specificity from each study for all reported threshold combinations of clinic or home measurement and ambulatory measurement. We identified the subset of studies where the combined data shared the common reference threshold (ambulatory monitoring 135/85 mm Hg) and carried out a meta-analysis using hierarchical summary receiver operating characteristic (HSROC) models that accounted for sampling variability, unexplained heterogeneity, and covariation between sensitivity and specificity. 29 Models were fitted to estimate and compare the sensitivity and specificity of diagnoses made at the most common thresholds (140/90 mm Hg for clinic measurement, 135/85 mm Hg for home measurement). Differences between the tests were expressed as relative sensitivities and specificities to ascertain if there was a significant difference in the relative performance of the tests compared with ambulatory measurement. In a final analysis all studies were included to explore the effect of different diagnostic thresholds. Models were fitted with the SAS Metadas code 31 32 and graphics were produced with RevMan 5. 33 When there were not enough studies available for fitting, we simplified the full models by assuming a symmetric receiver operating characteristic curve and fitting a fixed rather than random effects model. Sensitivity analyses considered the effect of differing diagnostic thresholds, as well as assessing test performance in populations with mean clinic blood pressure at or above the diagnostic threshold, to separately consider where study populations had been recruited entirely from a typical screening population (and so excluding any studies where an additional group of normotensive people were included as "controls"). Further analyses were planned with other population characteristics, methodological quality of the studies, and methods of monitoring.

Results

Our search identified 2914 studies (excluding duplicates), and we reviewed the full text of 115 papers for eligibility (fig 1). Of these, 20 contained extractable data; three were not written in English (one each in French, Spanish, and Dutch). The 20 studies included 5863 individuals with a mean age of 48.8 years and a mean proportion of women of 57%. Table 1 gives details of the population of each study 34-53 and table 2 gives details of their methodological quality.
The studies differed markedly in terms of age (mean age ranged from <33 to 60), sex (percentage of men ranged from 16% to 69%), sample size (from 16 to 2370), and whether a primary care or specialist population was used. All the studies had some degree of methodological weakness (or lack of clarity in what was reported): only 11 out of 20 studies used validated devices for all methods of monitoring, and only six provided evidence of blinding of those conducting the monitoring to previous blood pressure results. All studies avoided both partial and differential verification bias (that is, all patients in the studies received the same comparison measurement tests, regardless of initial results); reporting of attrition and selection criteria of participants was good. There was marked diversity between studies in terms of mean baseline blood pressure of the population, number of measurements for clinic (2-18), home (18-42), and ambulatory monitoring (24-111), period of ambulatory measurement, and blood pressure thresholds used (tables 3 and 4). Similar diversity was seen in the range of sensitivity and specificity values for individual studies (tables 5 and 6). Two studies reported very low specificities: Denolle 37 (specificity 0%) and Elijovich et al 38 (18%), both of which had small sample sizes. The study by Denolle included a total sample of 16 patients, none of whom was normotensive according to both clinic and ambulatory classifications. In Elijovich et al, only three out of a total sample of 72 patients were normotensive according to both clinic and ambulatory classifications. 38 We pooled studies with the same thresholds for the reference and index tests and included them in a meta-analysis. Eight studies used a threshold of 135/85 mm Hg for ambulatory blood pressure monitoring and 140/90 mm Hg for clinic blood pressure monitoring to diagnose hypertension, 39 43 46 48-52 while three used a threshold of 135/85 mm Hg for both ambulatory and home diagnosis. 34 36 49 One of the clinic comparison studies, 39 however, used the mean of the full 24 hour ambulatory blood pressure monitoring rather than the mean of daytime readings and was therefore not comparable with the others. Only one study provided proportions diagnosed as hypertensive using all three methods of blood pressure monitoring. 49 Figure 2 provides forest plots of the sensitivity and specificity of eligible studies, with performance with either home or clinic measurement compared with ambulatory monitoring. Figure 3 provides a summary receiver operating characteristic plot for the seven clinic comparison studies (mean age 47.1; mean proportion of women 57%). Most studies were within the 95% confidence interval of the summary point, 46 48-50 52 or at least close to the receiver operating characteristic curve, 51 showing some consistency in results across the studies. The remaining outlier study had a small sample size compared with the others, had a younger age profile with a lower mean blood pressure, and used an unvalidated monitor for clinic measurements. 39 Figure 4 plots the three home comparison studies (mean age 52.5; mean proportion of women 55%) on a summary receiver operating characteristic plot. Despite having quite different mean blood pressures and settings, two of the three studies were similar in terms of sensitivity and specificity. 36 49 With so few studies in this group, however, we could not plot a confidence interval or assess the statistical homogeneity.
A receiver operating characteristic (ROC) curve for a single study shows the relation between the true positive rate (sensitivity) and the false positive rate (100−specificity) for different cut-off points. In a meta-analysis, the points represent different studies, and the fitted summary ROC curves depict trade-offs between sensitivity and specificity that arise because of differences between the studies. Where the studies combined have different thresholds, the pattern might reflect variation with threshold seen in a single study. Where the studies combined share a threshold, the pattern will reflect trade-offs caused by the other differences between the studies.

We explored trade-offs between sensitivity and specificity with variation in blood pressure thresholds for home and clinic measurements (table 8). Increases in specificity and decreases in sensitivity with increasing threshold (and the converse for decreasing threshold) were significant for home measurements but not significant for clinic measurements. We could not carry out the planned sensitivity analyses evaluating methodological quality, population characteristics, or monitoring methods because of the small number of included studies. The removal of the outlying study 39 had a minimal effect on the results. Sensitivity analysis of clinic comparisons including only those with mean blood pressures close to or above the diagnostic threshold found a sensitivity of 85.6% (81.0% to 89.2%) and specificity of 45.9% (33.0% to 59.3%) for clinic blood pressure. As all three included studies of home monitoring comparisons used a typical general practice screening population with no control group of normotensive people, we did not perform a further sensitivity analysis.

Summary of findings
This review has shown that neither clinic nor home measurements of blood pressure are sufficiently specific or sensitive in the diagnosis of hypertension. We included 20 studies with 5863 patients that compared different methods of diagnosing hypertension in diverse populations with a range of thresholds applied. In the nine studies that used similar diagnostic thresholds and were included in the meta-analysis (two comparing home with ambulatory measurement only, six comparing clinic with ambulatory measurement only, and one study comparing all three methods), neither clinic nor home measurement could be unequivocally recommended as a single diagnostic test. Clinic measurement, the current reference in most clinical work and guidelines, performed poorly in comparison with ambulatory measurement, and, given that clinic measurements are also least predictive in terms of cardiovascular outcome, this is not reassuring for daily practice. 10-12 16-18 Home monitoring provided better sensitivity and might be suitable for ruling out hypertension given its relative ease of use and availability compared with ambulatory monitoring. In the case of clinic measurement, the removal of studies with a mean blood pressure in the normotensive range reduced specificity still further. This has profound implications for the management of hypertension, suggesting that ambulatory monitoring might lead to more appropriate targeting of treatment rather than starting patients on lifelong antihypertensive treatment on the basis of clinic measurements alone, as currently recommended. 5 In clinical practice, this will be particularly important near the threshold for diagnosis, where most errors in categorisation will occur if ambulatory monitoring is not used.
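The threshold behaviour summarised above (sensitivity falling and specificity rising as the cut-off increases) can be illustrated with a toy simulation; the blood pressure values below are simulated, not data from the review.

```python
# Minimal sketch of the threshold trade-off: as the diagnostic cut-off
# rises, specificity increases and sensitivity falls. Simulated readings.
import numpy as np

rng = np.random.default_rng(0)
# Simulated systolic clinic readings for reference-positive and
# reference-negative subjects (distribution parameters are invented).
hypertensive = rng.normal(150, 12, 500)
normotensive = rng.normal(128, 12, 500)

for threshold in (130, 140, 150):
    sensitivity = np.mean(hypertensive >= threshold)
    specificity = np.mean(normotensive < threshold)
    print(f"cut-off {threshold} mm Hg: "
          f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```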
Strengths and limitations of study
We used a comprehensive search strategy in multiple databases and all languages and are unlikely to have missed important numbers of relevant papers. While we did apply quality measures, we did not use a total measure of quality assessment to limit included papers, as it is recognised that combining different shortcomings can generate distinct magnitudes of bias, even in opposing directions. 29 54

The main weakness of our study is the paucity of data available. Only one study compared all three methods of measurement. Because of a lack of consensus internationally, a plethora of different thresholds was used, which meant that fewer than half of the studies could be combined in the meta-analysis. The planned sensitivity analyses based on methodological quality, population characteristics, and monitoring schedule could not be performed because of the small number of studies and the methodological weaknesses inherent in included studies that would have made interpretation of such a subgroup analysis speculative. The number of measurements used, however, varied between two and 18 for clinic measurements (though only one study used more than six) compared with 18 to 42 for home measurements. These differences will have contributed to the observed heterogeneity and could explain the poor performance of clinic measurements, albeit that this is typical in clinical practice. The mean age of the population in the clinic comparison studies (47.1) was over five years younger than the mean age in the home comparison studies (52.5) and younger than a typical population of patients with hypertension in primary care (mid-60s). 55

It was often not clear whether studies used validated measurement equipment, and even when it was mentioned, several studies provided validation citations for only some of the sphygmomanometers used. Given the shortage of literature on the subject, poor performance of a particular machine might conceivably lead to biased overall conclusions. We included in the meta-analysis only one study that used an unvalidated monitor, 39 and exclusion of this study had a minimal effect on the results. The findings clearly depend on the choice of the reference standard, and the three types of measurement are sufficiently different such that whichever one of them is chosen as the reference, the other two will perform relatively badly. The comparability of the performance of home monitoring to clinic measurement, rather than to ambulatory monitoring as might have been expected, could also reflect a relative paucity of relevant data, as there were only three home comparison studies, with wide confidence intervals for specificity (48% to 75%) and particularly poor performance for home monitoring. Ambulatory monitoring, while providing the best correlation with outcome of the methods evaluated, nevertheless in general represents a single 24 hour period in an individual's life; hence it is important that a "normal" day is chosen, typically a working day. A study of the long term reproducibility of ambulatory measurements taken three times over a two year period found that daytime ambulatory blood pressure provided a reproducible estimate in 54 people with borderline hypertension (correlation coefficient 0.70 for systolic blood pressure). 56 Finally, we cannot consider the implications for clinical practice in terms of the best method of monitoring treatment effects as our research question focused solely on diagnostic studies.
Comparisons with other studies
We could not find a previous study that combined literature on the diagnosis of hypertension with different methods of measurement. Guidelines to date have tended to recommend the use of clinic measurement with ambulatory blood pressure monitoring and, to a lesser extent, home monitoring as secondary methods in special cases such as white coat hypertension. 5-9 24 Our results suggest that while this is a pragmatic approach supported by the results of treatment studies, more widespread use of ambulatory blood pressure monitoring for the diagnosis of hypertension, particularly around the thresholds, might result in more appropriately targeted treatment.

Policy implications
The poor specificity of both clinic and home measurement and poor sensitivity of clinic monitoring mean some people will be treated who would be defined as normotensive on the basis of ambulatory blood pressure monitoring. How big a proportion this is of the total number of people labelled as hypertensive will depend on the prevalence of hypertension in the population being studied. This can be seen in the sensitivity analysis where specificity drops as prevalence increases. The positive and negative likelihood ratios were 2.07 and 0.25 for home compared with ambulatory measurement (across three comparison studies), respectively, and 2.94 and 0.34 for clinic compared with ambulatory measurement (across seven comparison studies), respectively. This suggests some correlation between the results of home or clinic measurement and ambulatory monitoring, but the correlation is not strong (positive likelihood ratios of over 10 and negative likelihood ratios of less than 0.1 would indicate a strong relation 57 ).

To help interpret this for clinical practice, 58 if the prevalence of hypertension was as low as 10% (for example, in people under 40), then out of every four positive diagnoses provided by clinic measurement, close to three would be incorrect as judged by the reference standard of ambulatory measurement. If half of the population were hypertensive (such as those over 65), this would be reversed, and three out of every four positive diagnoses provided by clinic measurement would be correct against ambulatory measurement. When prevalence is 50%, however, it might be more accurate to use the results of the sensitivity analysis where mean blood pressure in studies was close to or above the diagnostic threshold, and here only 61% of diagnoses after clinic measurements would be correct (table 9). Many people with a current diagnosis of hypertension might not in fact have hypertension. This has important implications, both for the effect of labelling itself on otherwise healthy people [59][60][61][62] and for the cost effectiveness of treatment. 63 Perhaps an approach using clinic (or home) measurements as a screening test followed by ambulatory blood pressure monitoring for blood pressures that are within 10 mm Hg of the threshold might be appropriate before definitive treatment, but arguably a wider use of ambulatory monitoring would be needed to avoid overtreatment of white coat hypertension as well as to detect masked cases. As we did not have sufficient studies that used a high threshold, we cannot determine the relevance of ambulatory monitoring in people with high clinic readings.
White coat hypertension, however, can manifest with very high clinic readings, 64 and, in the absence of a clinical indication for immediate treatment (such as the signs and symptoms of accelerated hypertension 65 ), clinicians might want to organise an urgent ambulatory measurement rather than treat on the basis of limited clinic measurements.

Conclusions
Our study suggests that if ambulatory blood pressure monitoring is taken as the reference standard for the detection of hypertension, then treatment decisions based on clinic or home blood pressure alone, using thresholds of 140/90 mm Hg, result in substantial overdiagnosis. Ambulatory monitoring might lead to more appropriate targeting of treatment before the start of lifelong drug treatment, particularly around the diagnostic threshold. Considering the relative expense of ambulatory monitoring equipment, cost effectiveness analyses are essential before wholesale changes to the diagnosis of hypertension can be recommended.

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Ethical approval: Not required.

Data sharing: Dataset available from the corresponding author at r.j.mcmanus@bham.ac.uk. The dataset includes only anonymised material already in the public domain.

What is already known on this topic
Hypertension is traditionally diagnosed after measurement of blood pressure in a clinic, but ambulatory and home measurements correlate better with outcome

What this study adds
Compared with ambulatory monitoring, neither clinic nor home measurements have sufficient sensitivity or specificity to be recommended as a single diagnostic test
If the prevalence of hypertension in a screened population was 30%, there would only be a 56% chance that a positive diagnosis with clinic measurement would be correct compared with using ambulatory measurement
More widespread use of ambulatory blood pressure for the diagnosis of hypertension would result in more appropriately targeted treatment
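The prevalence arithmetic quoted above can be verified directly from the reported positive likelihood ratio of 2.94 for clinic compared with ambulatory measurement, applying Bayes' theorem on the odds scale; the sketch below reproduces the "three in four" figures and the 56% figure at 30% prevalence.

```python
# Check of the prevalence arithmetic above, using the positive likelihood
# ratio of 2.94 reported for clinic v. ambulatory measurement.

def positive_predictive_value(prevalence, lr_positive):
    """Post-test probability of hypertension after a positive result."""
    pretest_odds = prevalence / (1 - prevalence)
    posttest_odds = pretest_odds * lr_positive
    return posttest_odds / (1 + posttest_odds)

for prevalence in (0.10, 0.30, 0.50):
    ppv = positive_predictive_value(prevalence, lr_positive=2.94)
    print(f"prevalence {prevalence:.0%}: {ppv:.0%} of positive "
          f"clinic diagnoses correct against ambulatory monitoring")
# 10% -> ~25% correct (about three in four incorrect)
# 30% -> ~56% correct, the figure quoted above
# 50% -> ~75% correct (about three in four correct)
```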
2017-08-10T19:43:39.201Z
2011-06-24T00:00:00.000
{ "year": 2011, "sha1": "d4b4d0d1827d3b1d377585abaa82ea08cc3973ae", "oa_license": "CCBYNC", "oa_url": "https://www.bmj.com/content/342/bmj.d3621.full.pdf", "oa_status": "HYBRID", "pdf_src": "BMJ", "pdf_hash": "e0e34b401b1c5489eb2f8c3a06bdb9cb895da7ce", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249096120
pes2o/s2orc
v3-fos-license
A broad v. focused digital intervention for recurrent binge eating: a randomized controlled non-inferiority trial

Background
Empirically validated digital interventions for recurrent binge eating typically target numerous hypothesized change mechanisms via the delivery of different modules, skills, and techniques. Emerging evidence suggests that interventions designed to target and isolate one key change mechanism may also produce meaningful change in core symptoms. Although both 'broad' and 'focused' digital programs have demonstrated efficacy, no study has performed a direct, head-to-head comparison of the two approaches. We addressed this through a randomized non-inferiority trial.

Method
Participants with recurrent binge eating were randomly assigned to a broad (n = 199) or focused digital intervention (n = 199), or a waitlist (n = 202). The broad program targeted dietary restraint, mood intolerance, and body image disturbances, while the focused program exclusively targeted dietary restraint.

Results
In intention-to-treat analyses, both intervention groups reported greater improvements in primary and secondary outcomes than the waitlist, which were sustained at an 8-week follow-up. The focused intervention was not inferior to the broad intervention on all but one outcome, but was associated with higher rates of attrition and non-compliance.

Conclusion
Focused digital interventions that are designed to target one key change mechanism may produce comparable symptom improvements to broader digital interventions, but appear to be associated with lower engagement.

Introduction
Binge eating is a symptom common across many subthreshold and diagnostic-level eating disorders. Although evidence-based treatment and prevention programs for binge eating exist (Hilbert et al., 2019), there remains a significant gap in the uptake of these services among those in need (Weissman & Rosselli, 2017). The reasons for this service gap include the high cost of mental health services, limited professional availability and lengthy waitlists, geographical constraints, and perceived stigma (Kazdin, Fitzsimmons-Craft, & Wilfley, 2017). If unaddressed, the presence of binge eating can lead to a clinically significant eating disorder or numerous adverse complications (Klump, Bulik, Kaye, Treasure, & Tyson, 2009). Thus, solutions that reduce this service gap are sorely needed.

One possible solution is to deliver intervention content through technological mediums, such as the Internet or smartphone apps. Digital interventions are advantageous because they can reach a large number of people at little to no cost, and can be completed at home, anonymously, and at a self-suited pace (Andersson, 2016). While many digital programs require professional guidance, the utility of self-guided digital interventions is becoming more widely recognized. Self-guided digital interventions are not only more disseminable, but technological advancements mean that some features that characterize the client-therapist relationship (tailored content delivery, assessment of risk profile, etc.) can be mirrored through in-built app functionality, such as conversational agents, anonymous online screening, and just-in-time intervention prompts (Fitzsimmons-Craft et al., 2021; Torous et al., 2021).
Despite producing smaller effects than professionally guided programs (Baumeister, Reichler, Munzinger, & Lin, 2014), the demand for self-guided digital interventions is growing among people with eating disorders (Linardon, Messer, Lee, & Rosato, 2021c). While self-guided programs are not the sole solution to the existing service gap, they can broaden the dissemination of evidence-based treatments and help more people than would have otherwise been the case in the absence of any intervention (Torous et al., 2021).

Existing digital programs for eating disorders typically involve numerous strategies, techniques, or modules designed to target a range of hypothesized change mechanisms, such as restrictive eating, mood dysregulation, body image concerns, and self-esteem deficits (de Zwaan et al., 2017; Fitzsimmons-Craft et al., 2020). While these broad, 'multi-target' programs are effective for many, they are also limited in certain ways. Some users may not require a program that targets multiple mechanisms because they do not exhibit some of the problems that are being addressed (e.g. a person who does not experience body image concerns does not need intervention content or strategies designed to alleviate body concerns). Receiving intervention content that is not relevant to a user's symptom profile may lead to issues with motivation, engagement, and drop-out (Andersson, Estling, Jakobsson, Cuijpers, & Carlbring, 2011).

Recent attention has been devoted toward developing more focused digital intervention formats. One example of this is the 'single session' intervention, which is an online program that incorporates one component of evidence-based treatment, targets one or two key change mechanisms, and requires only one encounter with the program (Schleider, Dobias, Sung, Mumper, & Mullarkey, 2020). Single-session interventions are hypothesized to improve the acceptability and accessibility of digital health tools because, unlike multi-session formats, they can minimize engagement burdens on users (as they can be completed in only one sitting). Furthermore, many single session programs are cost-free and publicly accessible, which likely yields far greater reach and public health impact (Schleider et al., 2020). Importantly, single session online mental health interventions can produce effect sizes only slightly smaller than multi-session interventions (Schleider & Weisz, 2017b).

Another example of a focused digital intervention format is a single-target program. Like a single-session intervention, single-target interventions are theoretically precise, mechanism-focused programs that address only one specific problem hypothesized to underlie an outcome (Linardon et al., 2021b). Such single-target, focused interventions are not typically completed in one sitting because they are multi-step programs that deliver more content and teach a broader range of skills. Even though such focused interventions take longer to complete than single-session interventions, compared to broad programs their degree of specificity may be more relevant to certain users. Further, if a focused intervention targets a mechanism known to underlie most of the effects of treatment, it might be just as beneficial as a broader program that targets numerous hypothesized mechanisms. Evidence supports the efficacy of focused digital interventions for eating disorder symptoms.
Multi-step, self-guided digital interventions designed to exclusively target maladaptive perfectionism (Shu et al., 2019) and dietary restraint (Linardon et al., 2021b) have produced effect sizes comparable to broad programs. However, no study has directly compared a broad and a focused program to determine their relative efficacy, as large, adequately powered trials are difficult to execute. Establishing their relative efficacy through a non-inferiority trial would have significant implications for the future design, delivery, and dissemination of digital interventions for eating disorders.

We conducted a randomized non-inferiority trial comparing a broad to a focused digital intervention for recurrent binge eating. The broad program was designed to target three key binge eating maintaining mechanisms (dietary restraint, mood intolerance, and body image), while the focused program was designed to target one key change mechanism (dietary restraint). Both interventions have demonstrated efficacy (Linardon, Shatte, Rosato, & Fuller-Tyszkiewicz, 2020b; Linardon et al., 2021b), but their comparative efficacy has yet to be tested. A decision was made to isolate dietary restraint in the focused program as prior multisite trials have shown that the effects of traditional CBT for bulimia nervosa are most strongly mediated by early reductions in this mechanism as opposed to the other hypothesized mechanisms (Sivyer et al., 2020; Wilson, Fairburn, Agras, Walsh, & Kraemer, 2002). Thus, there is reason to suspect that a digital intervention exclusively designed to target dietary restraint may be non-inferior to a digital intervention designed to target multiple theorized change mechanisms.

It was hypothesized that participants randomized to either of the two digital interventions would experience greater improvements in primary and secondary outcomes than participants randomized to the waitlist. It was also hypothesized that the focused digital intervention would not be inferior to the broad digital intervention at the post-test and follow-up periods.

Design
This study is a remote trial comparing three groups: a broad digital intervention, a focused digital intervention, and a waiting list. Assessments were conducted at baseline, 4 weeks post-randomization, and 8 weeks post-randomization. This trial received ethical clearance from Deakin University and was preregistered (ACTRN12621000914864). All participants provided informed consent.

Study population and recruitment
Participants were recruited in July-August 2021 via advertisements distributed throughout the first author's psychoeducational platform for eating disorders. This platform consists of an open-access website (https://breakbingeeating.com/) and social media accounts. It displays passive educational content related to eating disorders, including their causes, consequences, epidemiology, and help options. This platform contains passive information about eating disorders, rather than active, multi-step self-help programs. The majority of visitors do not have access to traditional forms of care and have reported using the platform to get some form of self-help information (Linardon, Rosato, & Messer, 2020a), rendering this a suitable target population. Respondents to advertisements first completed a screening survey to determine their eligibility.
Participants were eligible if they (1) were aged 18 years or over, (2) had access to the Internet and a smartphone, and (3) self-reported the presence of recurrent objective binge eating, defined as at least one episode every two weeks, on average, over the past three months. Participants who met eligibility criteria then completed baseline assessments.

Randomization
Participants were randomized into one of three groups in a 1:1:1 ratio generated through an automated computer-based random number sequence provided in Qualtrics. Upcoming allocations were concealed from the researchers and participants as the randomization process was entirely automated. Six hundred participants were randomized (see Fig. 1).

Study conditions
We implemented a user-centered design framework when developing the digital interventions. End-users were involved in the conception, design, and testing of the interventions through a series of phases. In Phase 1, the target population was surveyed to understand their receptiveness to and attitudes toward digital interventions, preferred functionality, and content delivery formats (Linardon, Shatte, Tepper, & Fuller-Tyszkiewicz, 2020c; Linardon et al., 2021c). In Phase 2, digital intervention content, functionality, and layout were developed, with usability evaluated in a small sample of end-users (Linardon, King, Shatte, & Fuller-Tyszkiewicz, 2021a). In Phase 3, the acceptability and preliminary efficacy of the two digital interventions were tested (Linardon et al., 2021b).

Broad intervention
The broad program, Break Binge Eating, sought to address three hypothesized binge eating maintaining mechanisms: dietary restraint, mood dysregulation, and body image concerns. Intervention content was based on Fairburn's (2008) transdiagnostic CBT protocol. There were four modules in total, the first being psychoeducational and the remaining three each dedicated toward targeting one maintaining mechanism (see Table 1 for a full description). Although participants were encouraged to stay on one module and practice its exercises for one week before moving on, the self-guided nature of this intervention meant that the participant could decide on the speed of their progression. Break Binge Eating was delivered through a smartphone app. Its content was presented via audio recordings, written text, and graphics. It took users between 30 and 60 min to go through each module, depending on how quickly the material was learnt. Alongside the main content were interactive in-built app features, such as quizzes, a digital self-monitoring diary, symptom tracking, and text boxes to complete required homework activities. One noteworthy feature was progress monitoring, which involved an end-of-day prompt asking participants to record the number of binge eating episodes experienced. If a participant responded to this prompt, the app would graph the user's daily binge episodes into a bar chart so that their progress could be visualized over the last 10 days. This symptom tracking feature was included to maintain accountability and potentially enhance motivation.

Focused intervention
The focused program, Breaking the Diet Cycle, sought to address one hypothesized maintaining mechanism: dietary restraint. This program was also based on established CBT protocols (Fairburn, 2008). Content was divided into four sessions. Each session taught the participant one key strategy designed to modify dietary restraint.
Session one was psychoeducational in nature, while sessions two, three, and four respectively taught users skills related to real-time self-monitoring, adopting regular eating, and overcoming food anxiety. Participants were also provided guidance on how long they should remain on one session before moving on to the next session (see Table 1). However, participants had the option of going at a self-suited pace. Breaking the Diet Cycle was delivered through both a web portal and a smartphone app. The web portal hosted session content, including written text, video tutorials, and graphics explaining the skills to be learnt, why they are important, and their successful implementation. In the pre-registered protocol, we stated that each session would take 30-60 min; however, participants likely completed each session in a shorter time frame given the amount of content provided. In each web session, participants were encouraged to practice the prescribed strategies via several homework exercises. These homework exercises were presented in the app component of the intervention, which allowed users to practice these skills digitally and in their daily life. For example, the app contained a digital food diary, allowing participants to monitor their eating behaviors in real time (as taught in session two). Importantly, the app did not contain additional content; it only helped participants practice the skills taught in the web sessions.

In both groups, participants were sent reminder emails every two weeks encouraging continued program use, and guidance was provided on how long it should take for participants to progress through the program. Participants were not reimbursed.

Control group
Control participants were placed on a waitlist and completed the same study assessments. After completing the post-test survey, control participants were given access to intervention content.

Participant characteristics
At baseline, participants indicated their age, gender, ethnicity, education level, and current treatment status. Participants also self-reported whether they had a current or prior eating disorder or other mental health disorder, as diagnosed by a professional (yes v. no response). Motivation to change was assessed by asking participants to rate the extent to which they were motivated to change their disordered eating habits. Confidence was also assessed by asking participants to rate the extent to which they were confident in their ability to change their disordered eating habits. Both items were assessed via a visual analog scale, ranging from 1 (not at all motivated/confident) to 10 (extremely motivated/confident).

Primary outcomes
The two pre-registered primary outcomes were the global score (Cronbach's α = 0.88) from the Eating Disorder Examination Questionnaire (Fairburn & Beglin, 1994) and the frequency of objective binge eating. The global score is calculated by averaging the four EDE-Q subscales, which include 22 items rated along a 7-point scale. Objective binge eating frequency was assessed by asking participants to indicate the number of episodes experienced over the past 28 days.

Secondary outcomes
Secondary outcomes included the shape concern (α = 0.83), weight concern (α = 0.73), eating concern (α = 0.72), and dietary restraint (α = 0.80) subscales from the EDE-Q, and items assessing the frequency of subjective binge eating and compensatory behaviors experienced over the past 28 days.
Compensatory behavior frequency was operationalized as the average number of self-induced vomiting, laxative use, and driven exercise episodes experienced over the past month. General psychological distress was also assessed via the total score (α = 0.86) from the Patient Health Questionnaire-4 (Kroenke, Spitzer, Williams, & Löwe, 2009).

Sample size calculation
Sample size was calculated based on non-inferiority tests, as these require larger samples than standard superiority testing. Based on a recent efficacy trial of the broad digital program used in this study (Linardon et al., 2020b), the efficacy for primary outcomes was expected to be d > 0.5, from which a non-inferiority limit of d = 0.25 was derived for powering the non-inferiority evaluation. This limit of d = 0.25 constitutes a preserved fraction of 50%, which is common in non-inferiority trials (Althunian, de Boer, Groenwold, & Klungel, 2017), and also represents a small but meaningful group difference that may be expected to be of clinical significance. Setting power at 0.80 and alpha at 0.05 (one-tailed), the required sample size per intervention arm was 198. Thus, our target sample size at baseline was 198 per group, which also ensured adequate power to test for differences between the control group and each of the intervention groups, for whom effect sizes were expected to be larger than the non-inferiority limit.

Statistical analyses
Analyses were undertaken using Stata version 16, and followed intention-to-treat principles by retaining participants in the condition to which they were randomized at baseline. In these models, missing data were handled using multiple imputation with 50 imputations derived via the fully conditional specification method. Results of subsequent analyses on each imputed dataset were pooled using Rubin's (1987) rules. We also conducted sensitivity analyses using the last observation carried forward method. Findings pertaining to these sensitivity analyses are presented in the online Supplementary Materials. Linear mixed models were used for hypothesis testing of outcome measures, except for binge eating and compensatory behavior frequency, where Poisson mixed models were used. All models included repeated measures (baseline to post-test) clustered within individuals. Comparisons between the two intervention arms and control group participants were limited to baseline v. post-test time-points, as control participants were given access to the intervention after post-test. Evaluations of change from post-intervention to follow-up were conducted for, and compared between, the two intervention groups.

For continuous outcomes, effect sizes are reported as standardized mean differences, with values of 0.20 considered small, 0.50 moderate, and 0.80 and above considered large (Cohen, 1992). For count outcomes, risk ratios (RR) were instead used. RR values of 1 indicate no difference in change in outcome count scores across groups (baseline to post-test comparisons) or time (post-test to follow-up). RR values <1 indicate a reduction in binge eating and compensatory behavior outcomes over time (post-test v. follow-up) or for either of the intervention groups relative to the control condition (post-test differences). RR <0.60 may be considered small, RR <0.29 moderate, and RR <0.15 large (Chen, Cohen, & Chen, 2010).

Results
Table 2 presents the characteristics of participants at baseline. Most participants were White, educated women. The three groups did not differ on any baseline variable, indicating that randomization was successful.
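As a cross-check of the sample size calculation reported in the Method above, the standard normal-approximation formula for a one-sided non-inferiority test on a standardized mean difference reproduces the 198-per-arm figure; this is a sketch of the textbook formula, not necessarily the exact software the authors used.

```python
# Cross-check of the reported sample size: one-sided non-inferiority test
# on a standardized mean difference, margin d = 0.25, power 0.80,
# alpha 0.05 (one-tailed), assuming a true between-group difference of zero.
from scipy.stats import norm

alpha, power, margin = 0.05, 0.80, 0.25
z_alpha = norm.ppf(1 - alpha)   # ~1.645 for a one-sided test
z_beta = norm.ppf(power)        # ~0.842
n_per_arm = 2 * (z_alpha + z_beta) ** 2 / margin ** 2
print(round(n_per_arm))         # -> 198, the trial's target per arm
```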
Study attrition
A total of 374 participants provided data on one of the two primary outcomes at post-test and 269 provided data on one of the two primary outcomes at follow-up. Three hundred and fifty-nine participants provided primary outcome data on at least one assessment. The three groups differed on post-test attrition rates (χ2 = 38.54, p < 0.001, ϕ = 0.25), with the control group (n = 47; 23%) associated with lower attrition at post-test than the broad (n = 73; 36%) and focused groups (n = 106; 53%). The broad intervention group was associated with a lower attrition rate at post-test than the focused group (p = 0.001). There was no group difference (p = 0.056) in attrition rates at the follow-up period between the three conditions (58% for control, 48% for broad, and 58% for focused group). Drop-outs were younger (d = 0.19, p = 0.019), and reported more frequent subjective binge episodes (d = 0.17, p = 0.049) and compensatory behaviors (d = 0.19, p = 0.024).

Broad intervention
The uptake rate (defined as at least one login) for the broad intervention group was high, with 171 participants (85.9%) logging in at least once. Of those who accessed the intervention, 86% completed at least 50% of the content from Module 1, 66% for Module 2, 48% for Module 3, and 31% for Module 4; 59% completed at least 50% of the content within the program. The mean number of modules completed was 2.32 (S.D. = 1.43), the mean number of self-monitoring diary entries was 24.23 (S.D. = 43.97), and the mean number of days the app was used was 13.14 (S.D. = 9.95).

Focused intervention
One hundred and sixty-four (82%) participants downloaded the focused program. Of those who accessed the intervention, 48% of participants completed at least 50% of program content, with a mean of 1.95 sessions (S.D. = 1.62) completed. Of those who accessed the app component (n = 134), the mean number of self-monitoring diary entries was 16.03 (S.D. = 36.09), and the mean number of days the app was used was 7.42 (S.D. = 7.75).

Group comparisons
The two groups did not differ on uptake rates (p = 0.336). However, when including all randomized participants (i.e. even those who did not log in to their program), compared to the focused group, the broad group was associated with higher rates of adherence (⩾50% of content completed; 50% v. 39%, p = 0.027, ϕ = 0.11) and a greater number of modules/sessions completed (p = 0.018, d = 0.23).

Primary outcomes
Results from the intention-to-treat analyses comparing the three groups on primary outcomes are presented in Table 3. (Table 3 note: values are based on non-imputed data; mean differences and effect sizes are derived from ITT analysis; ES, effect size; for objective and subjective binge eating and compensatory behaviors the reported value is a risk ratio, while for all other outcomes the effect size is a standardized mean difference.) When comparing the control group with the two intervention groups, the mean differences in objective binge eating frequency and EDE-Q global scores were statistically significant. In both cases, the intervention groups reported greater reductions in primary outcomes than the control group. However, there were no differences in the degree of change on primary outcomes between the two intervention groups, with the criteria for non-inferiority (difference in d < 0.25) being satisfied. Online Supplementary Fig. S1 presents a graphical representation of the rate of change in primary outcomes across the study conditions.

Secondary outcomes
When comparing the control group with the two intervention groups, the mean differences for each secondary outcome were significant (Table 3). In all cases, the intervention groups reported greater reductions in secondary outcomes than the control group.
When comparing the two intervention groups, the only significant difference to emerge was on compensatory behavior frequency, with the broad intervention group reporting greater reductions in compensatory behaviors than the focused group. No other differences in secondary outcomes were observed between the two intervention groups.

Follow-up
The degree of change between the two intervention groups from the post-test to follow-up period on primary and secondary outcomes is presented in Table 4. For all outcomes, initially achieved changes from baseline to post-test were sustained at follow-up for both intervention groups. However, compared to the broad group, the focused intervention group experienced significantly greater reductions from post-test to follow-up on compensatory behaviors and dietary restraint. No other between-group differences emerged at follow-up, with the criteria for non-inferiority being satisfied.

Discussion
We conducted a randomized non-inferiority trial comparing a broad and a focused self-guided digital intervention for recurrent binge eating. Both interventions produced greater reductions in eating disorder symptoms than the control group. The magnitude of effects was unexpectedly comparable to recent trials of guided digital interventions (Fitzsimmons-Craft et al., 2020) and traditional psychological treatments (Hilbert et al., 2019) for eating disorders. This is likely explained by different lengths of follow-up assessment. Whereas recent trials of guided or therapist-led treatments conducted follow-up assessments as long as 8 months post-randomization (Fitzsimmons-Craft et al., 2020), our follow-up assessment occurred at a time when rapid, large reductions in core symptoms are often observed (Linardon, Brennan, & de la Piedad Garcia, 2016). Perhaps effects diminish as follow-up length increases.

We found evidence that the focused program was not inferior to the broad program on any symptom measure. It is noteworthy that no between-group differences were observed on those outcomes that were not a direct target of the focused intervention (but were in the broad intervention). Perhaps this evidence of equivalence can be explained by the self-perpetuating nature of eating disorder symptoms. According to Fairburn's (2008) model of hypothesized feedback loops, extreme concerns with eating, weight and shape are both precipitants and consequences of restrictive and binge eating episodes, and engagement in disordered eating induces distress via the experience of shame and guilt. Thus, it is possible that targeting binge eating through one hypothesized mechanism may be sufficient to induce change on other symptoms implicated in this cycle. This cascade effect might also explain why we observed later improvements in compensatory behaviors in the focused program, even though these behaviors were not a direct target.

Intervention effects on attrition were also examined. While attrition was high for both intervention groups, the rates reported here are consistent with the attrition rate estimated in a recent meta-analysis of fully-remote, self-guided mental health app trials (Linardon & Fuller-Tyszkiewicz, 2020).
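The three-group attrition comparison reported in the Results (χ2 = 38.54, ϕ = 0.25) can be reproduced directly from the drop-out counts stated there; a quick check with scipy:

```python
# Reproducing the post-test attrition comparison reported above
# (chi-squared = 38.54, phi = 0.25) from the stated drop-out counts.
import math
from scipy.stats import chi2_contingency

#            dropped, retained at post-test
table = [[ 47, 202 -  47],   # waitlist control (23%)
         [ 73, 199 -  73],   # broad intervention (36%)
         [106, 199 - 106]]   # focused intervention (53%)

chi2, p, dof, expected = chi2_contingency(table)
phi = math.sqrt(chi2 / 600)  # effect size over the 600 randomized
print(f"chi2 = {chi2:.2f}, p = {p:.3g}, phi = {phi:.2f}")
# -> chi2 = 38.54, p < 0.001, phi = 0.25, matching the reported values
```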
A likely explanation for the high attrition observed in fully remote trials is that participants who enroll via effortless online methods come to realize that remaining in the trial requires more effort than previously thought. In contrast, trials that require researcher consultation may attract more motivated participants and better allow the researcher to explain from the outset what is expected, potentially leading to greater retention. Furthermore, attrition was lower in the waitlist, which is also consistent with findings reported in existing meta-analyses (e.g. Linardon & Fuller-Tyszkiewicz, 2020) and individual trials (Bakker, Kazantzis, Rickwood, & Rickard, 2018) of self-guided digital interventions. A possible interpretation of this is that, unlike those allocated to an immediate intervention group, those assigned to a waitlist are required to wait until after the follow-up assessment to gain access to program content, which could be a motivating factor to remain in the trial. Alternatively, perhaps those who did not engage with the interventions felt hesitant toward completing follow-up assessments asking about their experience of the program, resulting in the higher attrition found in these groups.

The broad intervention group produced higher adherence and lower attrition than the focused group, suggesting that multi-step, focused programs like these may not yield the same engagement advantages observed in single session online interventions (Schleider & Weisz, 2017a). Trials of single-session interventions (which are also highly focused in nature) have produced rates of retention as high as 75% (Schleider et al., 2021), which is substantially greater than what was observed for our focused intervention. Perhaps the ability to complete the program in one sitting, rather than the focus on one change mechanism, is what affords single session interventions an engagement advantage over single-target interventions. Conversely, it is not fully understood why retention and adherence were higher for the broad group than for the focused group. Perhaps the delivery of diverse program content accompanied by a large suite of different therapeutic techniques is better at enhancing user engagement. For example, someone allocated to a focused intervention might quickly disengage after not being receptive to the limited number of skills that are the key focus of the program, but this same person might persist with a broader program knowing that several other preferred techniques will be presented. Alternatively, it could be that the different device delivery formats between the two groups accounted for these effects. That is, accessing both a web and an app platform may have presented an additional problem with usability for those allocated to the focused program, potentially explaining the lower engagement rates.

There are important limitations to this study. First, as the follow-up assessment was conducted 8 weeks post-randomization, the longer-term effects of these digital intervention formats are unknown. It is possible that the benefits observed from focused interventions diminish to a greater extent over longer follow-up periods. Examining the relative, long-term efficacy of focused and broad digital interventions is an important future direction. Second, differential attrition and adherence rates between the two intervention groups may in part have been explained by the different digital delivery modes.
Apps may hold distinct advantages over web programs because they (i) are always within arm's reach, (ii) enable users to perform and record exercises in their natural environment, and (iii) are thought to facilitate faster skill acquisition and utilization because they can be engaged with in different contexts (Bakker, Kazantzis, Rickwood, & Rickard, 2016). Although available trials directly comparing web and app programs have failed to identify key outcome differences (Stolz et al., 2018), we cannot rule out the possibility that the observed differences were in part attributable to different device delivery modes. Similarly, one of the exercises (forbidden food exposure) targeting dietary restraint was only presented in the focused program (all other exercises targeting restraint were the same between the two programs), potentially accounting for some of the observed effects. However, this exposure exercise was presented in the last session of the focused program (see Table 1), and considering that around 75% of participants dropped out prior to accessing this session, this difference between the two programs likely had a negligible impact on study findings.

Third, attrition was high. Although simulation studies indicate that multiple imputation provides unbiased parameter estimates even in the presence of large amounts of missing data (Madley-Dowd, Hughes, Tilling, & Heron, 2019), readers must take into account the amount of missing data when interpreting these findings. We note that re-running group difference tests under the assumption that people dropped out due to lack of symptom improvement led to predictable dampening of effect sizes, but all effects remained significant. Thus, we have some confidence in the robustness of the presented findings, but caution that the true treatment effects may be somewhere between the conservative estimates in our re-analysis and those presented in-text.

Fourth, generalizability of findings is limited to White, well-educated, younger women. Attempts to recruit participants from other racial, gender, and socioeconomic groups are needed to better understand the role of digital interventions in different populations. Likewise, due to the self-reported nature of assessments, data on participant body mass were not collected. Body mass index may have moderated intervention effects, as has been shown previously (Vall & Wade, 2015), suggesting that consideration of this variable in future trials is necessary.

Present findings highlight the viability and clinical utility of both broad and focused formats of digital intervention for binge-spectrum eating disorders. Although digital interventions are not designed to replace traditional psychological treatment or completely resolve the existing service gap, we show that brief, low intensity, scalable online programs with different degrees of focus may be palatable options for many, including those who are either not interested in or cannot access traditional treatment approaches. We also show that focused programs designed to target one central change mechanism may be sufficient to induce meaningful change in other key eating disorder symptoms. A next step in research is to identify individual characteristics predictive of responsiveness to different digital intervention formats so that we can personalize the delivery of different intervention options for people with eating disorders.

Conflict of interest
The authors declare no conflict of interests.
2022-05-28T06:22:56.850Z
2022-05-27T00:00:00.000
{ "year": 2022, "sha1": "ba04e604fe6828527e10d8364be5fb10c018a3fe", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/7C5D5F3794FB9C9FD8B2CE6E1EB2EE0A/S0033291722001477a.pdf/div-class-title-a-broad-span-class-italic-v-span-focused-digital-intervention-for-recurrent-binge-eating-a-randomized-controlled-non-inferiority-trial-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "f45674f98608628dfb07513c6fb86a3f2e5a7013", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
10537816
pes2o/s2orc
v3-fos-license
Prediction of the Vickers Microhardness and Ultimate Tensile Strength of AA5754 H111 Friction Stir Welding Butt Joints Using Artificial Neural Network

A simulation model was developed for the monitoring, controlling and optimization of the Friction Stir Welding (FSW) process. This approach, using the FSW technique, allows identifying the correlation between the process parameters (input variables) and the mechanical properties (output responses) of the welded AA5754 H111 aluminum plates. The optimization of technological parameters is a basic requirement for increasing the seam quality, since it promotes a stable and defect-free process. Both the tool rotation and the travel speed, the position of the samples extracted from the weld bead and the thermal data, detected with thermographic techniques for on-line control of the joints, were varied to build the experimental plans. The quality of the joints was evaluated through destructive and non-destructive tests (visual tests, macrographic analysis, tensile tests, Vickers indentation hardness tests and thermographic controls). The simulation model was based on the adoption of Artificial Neural Networks (ANNs) characterized by a back-propagation learning algorithm with different types of architecture, which were able to predict with good reliability the FSW process parameters for the welding of the AA5754 H111 aluminum plates in butt-joint configuration.

Introduction
The process of Friction Stir Welding (FSW) is a solid-state welding method based on frictional and stirring phenomena, which was discovered and patented by the Welding Institute of Cambridge in 1991 and documented in the literature by Thomas [1], Nandan et al. [2], and Rodrigues et al. [3]. In this process, welding heat is produced by a rotating non-consumable tool which plunges into the work piece and moves forward. Therefore, the welding is possible thanks to the action of a tool that generates heat by friction between its shoulder and the base material, giving rise to plastic deformation with its pin. Significant advantages can be obtained when it is compared with fusion joining processes for aluminum due to a very low welding temperature: mechanical distortion is practically eliminated, with a minimal Heat Affected Zone (HAZ), and there is an excellent surface finish [2]. Additionally, FSW yields no crack formation and porosity right after bonding because of the low total heat input. The technology relies on the use of a particular tool, as shown in Figure 1, which provides both the heat necessary for the plasticization of the material and the motion needed to stir the plasticized material and to generate the junction.

In the literature, there are numerous contributions regarding the application of this process, which is used to successfully weld low-melting temperature alloys, steel plates of considerable thickness and dissimilar materials, which are hardly weldable with fusion welding processes. Since the FSW process was discovered, the demand for assembled lightweight metal structures has increased. FSW has been widely used in the aerospace, aeronautics and marine sectors, in fuel tanks and in the food-saving industry for about a decade. It is used to bond aluminum alloys as well as Cu alloys, Ti alloys and steel [4,5]. In particular, Al alloys, thanks to their high strength-to-weight ratio, low density, forming properties, low costs and recyclability, are the most widely employed materials in many automotive, marine and aerospace applications.
In this case, the FSW process avoids the tendency of the aluminum alloy to show cracks and porosity, which frequently occur after fusion welding. An aluminum alloy of large aeronautical and automotive interest is the AA5754 H111; however, the FSW process of non-heat-treatable aluminum-magnesium (Al-Mg) alloys (5xxx series) is substantially less explored in the literature. The mechanical properties of the welds produced from Aluminum 5xxx alloys depend mainly on the grain size and the dislocation density, due to the phenomena of plastic deformation and recrystallization occurring during the FSW process, as shown by Senkara et al. [6] and by Miles et al. [7]. Jin et al. [8] studied the FSW of AA5754 using constant parameters and examined the microstructural development and microhardness distribution in the welds. In the work of Kulekci et al. [9], the effects of the tool pin diameter and the tool rotation speed at a constant travel speed were investigated on the fatigue properties of friction stir overlap welded AA5754. Other papers [10,11] provide information on the influence of process parameters on the tensile and on the fatigue behavior of a Friction Stir Welded joint under a single FSW parameter in a tailor-welded blank of AA5754. Nevertheless, these studies were not explicit with regard to the process parameters that were employed. Furthermore, as process control techniques, they cannot provide information about the performance of the process during welding and require lengthy testing times, making them unfeasible for the industrial field. Other authors suggested Infrared Thermography (IRT) to study the thermal behavior of welded joints. Even in this case, there are only a few studies on thermal monitoring. Among these, there are some studies by Serio et al. [12][13][14] that demonstrate how the absolute temperature is affected by environmental conditions and is influenced by the experimental set-up adopted for the tests. Consequently, it cannot be used as a representative parameter of the FSW process. In particular, a more sensitive thermal parameter is proposed for the monitoring of the FSW process, representing the heat generated during the process. This research has identified a process parameters window suitable for obtaining good quality joints on AA5754 H111. It was shown that thermographic techniques can be effective instruments of control for the FSW process.
In this context, the control and optimization of manufacturing processes have prompted the interest of many researchers towards the study of new technological tools, since they represent a critical issue for production engineering. In the case of the FSW process, in addition to the use of thermographic techniques, more than experimental trials is necessary to better understand the process-related dynamics and to control all significant variables. Therefore, the integration of information technology is required for enhancing the quality of manufacturing systems. The implementation of numerical and analytical models can reduce the time and cost of experiments and analysis through quantitative solutions. A model based on the adoption of one or more Artificial Neural Networks (ANNs) can help to identify the relation between process parameters and weld quality. In particular, according to Facchini et al. [15], one of the main advantages of this technique is that it can produce good results even when the supplied data are noisy or incomplete.
In these cases, an ANN can predict the output parameters after learning from a training data set, where the learning algorithm determines the numeric weights of the links among neurons that produce a robust and correct output. The use of ANNs to model various problems in many fields, such as materials science and engineering, has recently been spreading [16-22]. ANNs are inspired by natural neural networks, so they are systems able to process information and simulate the behavior of the brain's mechanisms. In summary, the main advantages of a neural network are the ability to implicitly detect complex nonlinear relationships between dependent and independent variables, the ability to detect all possible interactions between predictor variables, and the availability of multiple training algorithms. Neural network software packages are very common among scientists and manufacturing researchers. In particular, their applications in the field of welding have shown good success. In the scientific literature, an ANN was adopted in order to predict the mechanical properties of butt welds [23]. Yilmaz et al. [24] developed a generalized regression neural network model that allows predicting the tensile strength of steel wires, and the predicted and the experimental values were very similar. Ates et al. [25] introduced a new technique based on ANNs for the prediction of gas metal arc welding parameters. Input parameters of the model consisted of gas mixtures, whereas the output response of the ANN model included several mechanical properties, i.e., tensile strength, elongation and weld metal hardness. The ANN controller was trained with the extended delta-bar-delta learning algorithm, and the calculated results were coherent with the measured data. As far as the FSW process is concerned, there are only a few papers in the scientific literature that discuss the modeling of this welding process by a neural network [23,26-29]. In particular, the most interesting works are those of Shojaeefard et al. [30], based on the adoption of a neural network trained with Particle Swarm Optimization (PSO) for the modeling and forecast of the mechanical properties of friction stir welded butt joints in AA7075/AA5083. Asadi et al. [31], adopting an ANN, identified a relationship between the grain size and the hardness of nanocomposites in the FSW process. Concerning the adoption of ANNs for the FSW process in the case of welding of 5xxx aluminum alloys, the available scientific works are very few. This type of aluminum alloy is widely used in the marine, automotive and aviation fields, thanks to its high resistance to corrosion. In this paper, an ANN was designed in order to develop a suitable simulation model for predicting, monitoring and controlling the mechanical properties of welded aluminum alloy plates on the basis of the FSW process parameters. The data set adopted for training, testing and validation of the ANN consisted of the results obtained from experimental cases. The remaining part of this paper is organized as follows: Section 2 presents the experimental procedures; Section 3 presents the simulation model implementation; and Section 4 presents the results analysis and conclusions.
Data Analysis The present work uses data from previous studies carried out on the FSW process of the alloy AA5754 H111 [12,14,32], in which a qualitative analysis of welded joints with non-destructive (visual inspection) and destructive testing (macrographic tests) was performed in order to detect macro defects present on the surface and within the welded area. The results of the tensile tests of all welded specimens, in terms of ultimate tensile strength (UTS) and Vickers micro hardness, were considered to perform a quantitative analysis of the process. The thermal behavior of the FSW process was studied through the thermographic technique. In particular, two thermal parameters were considered: the maximum temperature and the slope of the heating curve measured during the FSW process on the retreating and advancing sides (MSHC RS and MSHC AS , respectively). The results obtained showed that the data derived from the thermographic controls can be linked to the quality of the welded joints in terms of UTS. These studies underline how infrared technology makes it possible to monitor the FSW process in a quantitative manner, giving important data on the thermal behavior of the joints during the process. Finally, the quality of the welded joints, evaluated in terms of UTS and micro hardness, was directly connected to the thermal parameters with the use of the ANN, which is the focus of this work. The model built with the use of the neural networks allows predicting quantitatively the mechanical behavior of the FSW joints, as shown in Figure 2. Collecting the Experimental Data The data used refer to experiments on two hot rolled AA5754 H111 plates (6 mm × 200 mm × 200 mm) which were welded with the FSW process in butt configuration. AA5754 H111 is a non-heat-treatable aluminum-magnesium alloy with a nominal Mg content in the range of 2.6%-3.6%. It is supplied in a temper state which gives maximum formability to the material. This aluminum alloy is characterized by excellent resistance to corrosion in a marine environment, has high formability and is weldable by fusion.
Due to its characteristics, it is used to make pressure vessels, components of trucks and tankers, details of chemical plants and in shipbuilding. The nominal composition of AA5754 H111, as registered with the Aluminum Association [33], and the physical properties of the material are shown in Tables 1 and 2, respectively. All the welding tests were carried out in position control, on a Friction Stir Welding Machine-model: LEGIO™ FSW 4UT (for 2-D welding: Y-axis range 400 mm; X-axis range 1000 mm and Z-axis 300 mm), provided by ESAB AB Welding Automation and placed in the TISMA Laboratory (Innovative Techniques for Welding of Advanced Materials, http://www.poliba.it/it/TISMA) of the Department of Mechanical Engineering, Mathematics and Management of the Polytechnic of Bari, as shown in Figure 3. The workpiece was fixed on a rigid backing-plate and clamped along the welding direction on both sides to avoid lateral movement during welding. The terminal part of the workpiece was positioned on the worktables as shown in Figure 4. The welding direction was parallel to the rolling direction and the dwell time, which is the period needed to preheat the material and to achieve a sufficient temperature ahead of the tool to allow the traverse, was always kept at 15 s, whereupon the tool moved with constant travel speed according to the parameter combination selected and described below. During the penetration phase, the rotating tool pin penetrates into the workpiece until the tool shoulder makes contact. The penetration speed was about 0.5 cm/min. The shoulder diameter was 22 mm and the tool was inclined at 1.2° with respect to the workpiece to facilitate mixing of the material. The welding was carried out using the following values of the tool rotation speed and travel speed: 500-700 rpm and 20-30 cm/min, respectively. Using these welding parameters, different samples were welded for destructive and non-destructive testing in order to detect macro defects placed on the surface and within the welded area. Visual inspections and metallographic tests were the first examinations performed. They were carried out by preparing cross-sectional samples taken from all welded joints. Specimens were prepared using standard metallographic methods for macroscopic examinations of the weld zones. Cross sections of the welds were cold mounted, polished and etched with a solution consisting of 5 mL of distilled water and 120 mL of hydrochloric acid for 90 s. After these treatments, the bead appearance was observed by capturing the images detected from the cross sections to identify large and very small internal flaws. The surface appearance of the FSW showed a regular series of partially circular ripples. These ripples were essentially cycloidal and were produced by the circumferential edge of the shoulder during traverse. Many tests showed a continuous flash but with a marked ripple. This demonstrated the significant ductility of the material and that the plastic deformation suffered by the material changes periodically over time. Visually, the welds carried out with the greater rotation speed were unsound, because they presented macro voids in the section, denominated "tunnel" defects. These defects were caused by the use of incorrect process parameters, which provided a wrong heat input; this produced a wrong mixing action of the material and created voids in the section and on the surface of the weld. Macrographic analyses were carried out to detect internal flaws of the welds. Almost all of the analyses revealed a good mixing and a good penetration of the tool in the joints, except for the joint sections realized using the highest rotation speed (n = 700 rpm), where defects such as cavities were observed, due to a non-appropriate contribution of heat input and stirring rate. All macrographs presented a nugget shape with a relevant elongated form, and the typical "onion rings" identifying the mixing zone characteristic of the FSW process were clearly visible. The combination of speeds, 500 rpm and 20 cm/min, exhibited a high quality with no significant defects. Transverse tensile tests, at room temperature, were performed to evaluate the mechanical properties of the welded joints. All tests were performed on an MTS servo hydraulic machine (Model 370, MTS, Eden Prairie, MN, USA), under displacement control with a constant crosshead speed displacement rate of 5 mm/min according to Standard UNI EN ISO 6892-1:2009 [34]. Specimens were cut distant from the tool start and exit positions because, around these positions, the weld process was not stationary. In particular, they were machined and prepared according to standards UNI EN ISO 6892-1:2009 and UNI EN ISO 25239:2011 [34,35] and they were obtained in the orthogonal direction with respect to the rolling direction. The gauge section of the specimens was located within the welded zone and the geometrical dimensions chosen were 12 mm width and 200 mm length for a gauge length of 50 mm.
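As a rough illustration of the parameter window just described, the Python sketch below enumerates hypothetical combinations of tool rotation speed and travel speed and computes the corresponding weld pitch (the travel-per-revolution ratio used later in the paper). The specific levels actually tested are those listed in Table 3, so the sampled values here are assumptions for illustration only.

from itertools import product

# Hypothetical sampling of the reported parameter window: tool rotation
# speed n (500-700 rpm) and travel speed v (20-30 cm/min).
rotation_speeds_rpm = [500, 600, 700]
travel_speeds_cm_min = [20, 30]

for n_rpm, v_cm_min in product(rotation_speeds_rpm, travel_speeds_cm_min):
    v_mm_min = v_cm_min * 10.0            # cm/min -> mm/min
    weld_pitch_mm = v_mm_min / n_rpm      # mm of travel per tool revolution
    print(f"n = {n_rpm} rpm, v = {v_cm_min} cm/min -> WP = {weld_pitch_mm:.2f} mm/rev")

For example, n = 500 rpm and v = 20 cm/min gives WP = 0.40 mm/rev, the combination that the experimental results below identify as the closest to the reference weld pitch of 0.35 mm.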
The results of the tensile tests showed that the maximum values of UTS were achieved for a tool rotation speed of 500 rpm and a travel speed of 20 cm/min. Documented statistical analyses [14,32] point out the dependence of the mechanical characteristics of the AA5754 H111 FSW joints, in terms of UTS, on the tool rotation speed and the position along the welding direction. Micro hardness tests were conducted considering the changes in the microstructure of the FSW joints which result from the welding process. The FSW process was asymmetric, and the thermo-mechanical action, due to the movement of the tool against the surface of the workpiece, creates a microstructural evolution in the welded zone. Therefore, moving outward from the FSW joint, the following areas were observed, as shown in Figure 5: the nugget zone, which is the part swept by the pin, the TMAZ (Thermo-Mechanically Affected Zone) and, finally, the HAZ (Heat Affected Zone) and the base material. The geometry of the nugget can be changed by varying the rotation speed of the tool. The effects of the FSW on the hardness distribution were fully analyzed. It was observed that the weld samples with the same travel speed had similar profiles. From the results, no HAZ softening was found, which was expected because the tempered H111 is almost equivalent to the O-temper. In almost all of the tests, the highest value of micro hardness in the stirred zone was found not in the middle of the joint but shifted towards one of the sides of the joint, where higher plastic strain was observed, and the micro hardness curve shows a W-shape. The average hardness, in the nugget and in the base material, was similar among all the samples. The hardness profile greatly depends on the precipitate distribution and only slightly on the grain and dislocation structure [36]. Therefore, the evolution of the precipitate distribution, with the experienced temperature peak and with the strain introduced during the welding, produces the observed hardness variation. The analysis of all samples, conducted with different process parameters, shows different values of the mechanical properties; the ultimate tensile strength (UTS) and the HAZ micro hardness of all the FSW joints were used to perform a quantitative analysis of the process and to describe the influence of the process parameters on the quality of the joints (Table 3). In addition, the thermal behavior of the FSW process was investigated through the thermographic technique. A detailed discussion of these results is presented in the work of Serio et al. [13]. In particular, for each test, the slope of the heating curve measured during the FSW process was evaluated. This parameter allowed evaluating some important thermal characteristics of the joints, such as the fraction of heat induced in the joint and the relative heating rate of the material. The control of the slope of the heating curve has allowed us to monitor the FSW process in a quantitative way, revealing the thermal behavior of the joints during the process.
The main considerations about the thermal behavior of the joints can be summed up as follows: • The higher temperatures were measured along the retreating side for all tests. • The maximum temperature reached during the process, pixel by pixel, can be used to monitor the stationary nature of the process. • The Maximum Slope of Heating Curve (MSHC) of the thermal profiles evaluated on the surface of the joints can be used for monitoring the process parameters. This parameter is more sensitive than the maximum temperature as it is directly correlated with the energy and hence the heat supplied during the welding process. In addition, statistical analyses were carried out in order to verify the statistical significance of the effect produced by the process parameters on the MSHC [14]. The analysis of variance (ANOVA) showed that the parameter MSHC was influenced by the tool rotation speed, the travel speed and the position. The results of all tests are summarized in Table 3 and the same data were used to train the ANN.
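The MSHC is, in essence, the steepest gradient of the pixel-wise temperature-time profile recorded by the infrared camera. The Python sketch below shows one plausible way to extract such a parameter from a sampled heating curve; the synthetic profile, sampling interval and smoothing window are assumptions, since the paper does not detail its extraction procedure.

import numpy as np

def max_slope_heating_curve(temps_c: np.ndarray, dt_s: float) -> float:
    """Return the maximum slope (°C/s) of a sampled heating curve.

    temps_c : temperatures of one pixel over time (°C)
    dt_s    : sampling interval of the IR camera (s)
    """
    # Light moving-average smoothing to suppress sensor noise before
    # differentiating (the window length is an arbitrary choice here).
    kernel = np.ones(5) / 5.0
    smoothed = np.convolve(temps_c, kernel, mode="valid")
    slopes = np.gradient(smoothed, dt_s)  # finite-difference derivative
    return float(slopes.max())

# Synthetic heating curve: exponential rise from 25 °C toward a 400 °C plateau.
t = np.arange(0.0, 30.0, 0.1)
profile = 25.0 + 375.0 * (1.0 - np.exp(-t / 5.0))
print(f"MSHC ~ {max_slope_heating_curve(profile, dt_s=0.1):.1f} °C/s")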
Design and Training of the ANNs The simulation model was developed in order to establish a relationship between the mechanical properties of the joints and the technical parameters of the FSW process. Two different ANNs were adopted: the first network (ANN HV ) was used to identify the Vickers micro hardness of the HAZ on the basis of five different input parameters (n, v, p, MSHC RS , and MSHC AS ); the second network (ANN UTS ) considers the Ultimate Tensile Strength. The inputs of the ANN UTS were 6-component vectors; five of them were the same as in the first network (n, v, p, MSHC RS , and MSHC AS ), while the sixth input parameter was HV haz (the value estimated by the first network). The ANNs were implemented using Alyuda NeuroIntelligence™ neural network software (2.2, Alyuda Research Company, LLC., Cupertino, CA, USA). At the beginning, all data were preprocessed, simply converting the input data into a new version, for three reasons [37]: (i) to ensure the size of the data reflects the importance level in determining the output; (ii) to facilitate the random initialization of the weights before training the networks; and (iii) to normalize all data to avoid different measurements due to the different units of the inputs. The configurations adopted for the learning process were as follows: the logistic function was selected as the activation function for all neurons, the learning and momentum rates were fixed at 0.01, and the training process stopped when the model's mean squared error decreased by less than 1 × 10 −6 or the model completed 2.5 × 10 5 iterations, whichever condition occurred first. In order to identify the best architecture, all training algorithms supported by Alyuda NeuroIntelligence™ were applied to the analysis of the data, and the coefficient of determination R 2 and the correlation coefficient r were evaluated for each of them. It is convenient to note that R 2 is equal to r in linear regression analyses, but that is not necessarily the case in ANNs [38]. In most cases, the optimal R 2 and r were obtained for the batch back propagation (BBP) learning algorithm (Table 4); therefore, both ANNs were trained with the BBP training algorithm, based on a recursive procedure that estimates the weights according to the errors of each layer [36,39]. The synaptic weights were updated using the method of gradient descent to minimize the error, which evaluates the difference between the expected value (target) and the real value of the measure (see Equation (1)). According to the neural network routine, the transformation of the input vector into the output vector was identified as the function f (see Equation (2)), where the "wih ij " value identifies the weight attributed to the connection from node i of the input layer to node j of the hidden layer, while "who ij " represents the value of the weight of the link between the hidden and output layers. The learning process was characterized by an iterative procedure aimed at identifying the optimal combination of the weight matrix w*, through the minimization of the E(w) function (see Equation (3)), where x is the result produced by the matrix of weights w* and d is the "desired" output.
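Based on the definitions above (target/output error, feed-forward transformation with input-hidden weights wih ij and hidden-output weights who ij, logistic activations, and a sum-of-squares error minimized over the weights), Equations (1)-(3) can plausibly be written in the standard form below; the exact notation of the original paper may differ.

\begin{align}
e &= d - x \tag{1}\\
x &= f(u) = \sigma\Big(\sum_{j} who_{j}\,\sigma\Big(\sum_{i} wih_{ij}\,u_{i}\Big)\Big),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}} \tag{2}\\
E(w) &= \frac{1}{2}\sum_{k}\big(d_{k} - x_{k}\big)^{2},
\qquad w^{*} = \operatorname*{arg\,min}_{w} E(w) \tag{3}
\end{align}

Here u_i are the normalized inputs, σ is the logistic activation chosen for all neurons, d is the desired (target) value and x is the network output.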
ANN HV Prediction Model The first network, ANN HV , was developed with five input nodes (n, v, p, MSHC RS , and MSHC AS ) and only one response node (output node), identified as the micro hardness of the Heat Affected Zone of the welds, HV haz . The data sets are displayed in Table 3 and all outputs were normalized in the range 0-1 to improve the stability of the neural network. In this case, in order to reduce the overfitting phenomenon and avoid an excessive computation time, the available data set (made up of a small sample size) was partitioned into three subsets adopting the Leave-one-out cross-validation (LOOCV) procedure; consequently, three different subsets were identified. • Training set: The group of data constituted by a sample of 75% of the total data for training the ANN. The synaptic weights were, in this phase, repeatedly updated in order to reduce the error between the experimental outputs and the respective targets; • Testing set: This group of data includes a sample of 12.5% of the total data, given to the network during the learning phase, in which the error was evaluated in order to update the threshold values and the synaptic weights; • Validation set: This group of data includes a sample of 12.5% of the total data. This phase consists of identifying the underlying trend of the training data subset, avoiding the overfitting phenomenon. If the error measured on the validation subset begins to increase, the training is stopped. This procedure runs together with the training procedure. To identify the "best" architecture of the network, a "trial and error" approach was adopted, which considered more than 1000 different network architectures. The network fitness score, based on the inverse of the mean absolute error (MAE) on the testing set, was computed by the software for every network with a different design (number of hidden layers, number of nodes, etc.). A higher fitness score allowed identifying the better network architecture. Consistently, for each configuration, the minimum prediction error was evaluated and the best accuracy was achieved adopting an ANN HV characterized by only one hidden layer with 12 neurons, as shown in Figures 6 and 7.
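For illustration, a minimal NumPy re-creation of the training scheme described above is sketched below: a 5-12-1 feed-forward network with logistic activations, trained by batch back propagation with a 0.01 learning rate on a 75/12.5/12.5 split. The data are random placeholders standing in for Table 3, early stopping on the validation subset is omitted for brevity, and this is not the Alyuda implementation used in the study.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Placeholder data standing in for Table 3: 5 inputs
# (n, v, p, MSHC_RS, MSHC_AS) and one output (HV_haz), normalized to 0-1.
X = rng.random((16, 5))
y = rng.random((16, 1))

# 75% / 12.5% / 12.5% split into training, testing and validation sets.
i_tr, i_te, i_va = np.split(rng.permutation(len(X)), [12, 14])
X_tr, y_tr = X[i_tr], y[i_tr]

# 5-12-1 architecture: input-to-hidden (wih) and hidden-to-output (who) weights.
wih = rng.normal(scale=0.5, size=(5, 12))
who = rng.normal(scale=0.5, size=(12, 1))
lr, prev_mse = 0.01, np.inf

for it in range(250_000):                      # 2.5e5 iteration cap
    h = sigmoid(X_tr @ wih)                    # hidden activations
    out = sigmoid(h @ who)                     # network output, cf. Eq. (2)
    err = y_tr - out                           # cf. Eq. (1)
    mse = float(np.mean(err ** 2))             # E(w) up to a constant, cf. Eq. (3)
    if prev_mse - mse < 1e-6:                  # stopping rule from the text
        break
    prev_mse = mse
    # Batch gradient-descent updates (logistic derivative is a * (1 - a)).
    d_out = err * out * (1.0 - out)
    d_hid = (d_out @ who.T) * h * (1.0 - h)
    who += lr * h.T @ d_out
    wih += lr * X_tr.T @ d_hid

print(f"stopped after {it} iterations, training MSE = {mse:.4f}")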
ANN UTS Prediction Model The second network (ANN UTS ) was developed with six input nodes (n, v, p, MSHC RS , MSHC AS , and HV haz ) and one response node (output), identified as the Ultimate Tensile Strength of the welds (UTS). In this case, the criteria adopted for the dataset partition, the sample sizes of the three subsets (training, testing, and validation) and the methodology chosen for identifying the architecture of the network were the same as for the ANN HV (see Section 3.2). The "best" reliability of the prediction was achieved adopting an ANN UTS characterized by only one hidden layer with four neurons, as shown in Figures 8 and 9. Experimental Results Figures 10 and 11 illustrate the surface and cross-section characteristics of the welded sample which showed the best results in mechanical terms. The pair of process parameters, n = 500 rpm and v = 20 cm/min, led to welds with high mechanical characteristics and high quality, with no significant defects either on the surface or in the section. Moreover, based on the reference weld pitch (WP = 0.35 mm), which indicates the ratio between the travel speed v and the tool rotation speed n [2], the sample welded with n = 500 rpm and v = 20 cm/min (WP = 200 mm/min ÷ 500 rpm = 0.40 mm per revolution) was the one nearest to the optimal ratio. In almost all of the tests, the highest value of micro hardness in the stirred zone was found not in the middle of the joint, but shifted towards one of the sides of the joint, where higher plastic strain was observed, and the micro hardness curve shows a W-shape, as shown in Figure 12. The average hardness in the nugget and in the base material was similar among all the samples. However, the results obtained from the experiments [32] motivate an optimization of the technological parameters in order to increase the seam quality, since this promotes a stable and defect-free process. Model Validation In order to evaluate the reliability of the mechanical properties predicted by the simulation model, the features of the weld bead output by the ANN were compared to the experimental results. For this purpose, the Mean Absolute Percentage Error (MAPE) was calculated for the two ANNs modeled. The results obtained from the use of the models (Table 5) demonstrated that the predicted parameters ensure a high level of reliability. Therefore, the neural networks were able to predict, with significant accuracy, the mechanical properties of the friction stir welded joints under a given set of welding conditions. Since the results achieved in the experiments have shown the most significant process parameters for the friction stir welding process applied to AA5754 H111 plates [32], the developed ANN model could be used to identify the optimal process parameter settings in order to achieve the desired welding quality.
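MAPE is the metric used above to validate both networks. A minimal sketch of its computation against held-out experimental values follows; the arrays are placeholders, not the study's data.

import numpy as np

def mape(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Mean Absolute Percentage Error, in percent."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((measured - predicted) / measured)) * 100.0)

# Placeholder UTS values (MPa): experimental vs. ANN-predicted.
uts_exp = np.array([215.0, 208.0, 190.0, 175.0])
uts_ann = np.array([221.0, 199.0, 201.0, 168.0])
print(f"MAPE(UTS) = {mape(uts_exp, uts_ann):.2f}%")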
Conclusions The results showed that the simulation model could be used as an alternative way of predicting, controlling and monitoring the FSW process. The MAPE values obtained for the outputs micro hardness (HV HAZ ) and ultimate tensile strength (UTS) were, respectively, 0.29% and 9.57%. The R 2 values were in all cases very high; in particular, R 2 values of 0.94 for the prediction of the HAZ parameter and 0.96 in the case of the UTS forecast were observed. Although the prediction of UTS was characterized by a higher MAPE compared to the estimated HAZ value, it was nevertheless considered acceptable from a technological perspective. According to a prudential approach, it is possible to apply a margin of 10% to the prediction of UTS; in this way, a high reliability of the model for many industrial environments will be ensured. The adoption of the simulation model can be very useful for the friction stir welding process. Nowadays, for this kind of process and these materials, there are no models in the scientific literature able to predict the mechanical parameters on the basis of the weld process parameters. The approach suggested in this paper allows identifying one or more unknown parameters starting from experimental data, and the reliability of the forecasting model increases with the amount of available data. This means that, as time passes, upgrading the model with new data (this is possible by means of a user-friendly ANN interface), the prediction model improves the reliability of the forecasting and will allow optimizing the quality of the weld joints through the identification and control of the process parameters. In many cases, the reduction of weld defects translates into greater safety and a decrease in repairs, predicting a drastic reduction of the costs related to additional testing, which can be replaced thanks to the estimation of the mechanical properties of the joints. In this case, a small sample size for the prediction of the target parameters was adopted, and the forecasts of HV HAZ and UTS were considered to be acceptable from a technological perspective. In order to improve the reliability of the model, it is therefore necessary to adopt an increased number of experimental data for the training, validation and test phases. Future research, moreover, could be conducted adopting other software packages which are able to expose the internal learning process and describe the reasoning behind a prediction provided by neural networks, as, in this case, the Alyuda software could not provide this.
Finally, this work suggests that the full integration of analysis, prediction, control and continuous learning into a single framework is promising, not only in the friction stir welding process but also in the prospect of other manufacturing technologies. Author Contributions: Livia M. Serio and Francesco Facchini conceived, designed and elaborated the ANN models; Livia M. Serio and Luigi A. C. De Filippis performed the welding, the characterization of the joints and wrote the paper; and Giovanni Mummolo and Antonio D. Ludovico oversaw the paper. Conflicts of Interest: The authors declare no conflict of interest.
Role of CX3CL1/CX3CR1 Signaling Axis Activity in Osteoporosis Osteoporosis is a civilization disease which is still challenging for contemporary medicine in terms of treatment and prophylaxis. It results from excessive activation of the osteoclastic cell line and of immune cells like macrophages and lymphocytes. Cell-to-cell inflammatory information transfer occurs via factors including cytokines, which form a complex network of cell humoral correlation, called the cytokine network. Recently conducted studies revealed the participation of the CX3CL1 chemokine in the pathogenesis of osteoporosis. CX3CL1 and its receptor CX3CR1 present unique properties among the over 50 described chemokines. Apart from its chemotactic activity, CX3CL1 is the only chemokine which may function as an adhesion molecule, facilitating easier penetration of immune system cells through the vascular endothelium to the area of inflammation. The present study, based on a review of the world literature, sums up and describes convincing evidence of a significant role of the CX3CL1/CX3CR1 axis in processes leading to bone mineral density (BMD) reduction. The CX3CL1/CX3CR1 axis plays a principal role in osteoclast maturation and in binding them, together with immune cells, to the surface of the bone tissue. It promotes the development of inflammation and the production of many inflammatory cytokines near the bone surface (i.e., TNF-α, IL-1β, and IL-6). High concentrations of CX3CL1 in serum are directly proportional to increased concentrations of bone turnover and inflammatory factors in human blood serum (TRACP-5b, NTx, IL-1β, and IL-6). Given that acting against the CX3CL1/CX3CR1 axis is a potential target of immune treatment in osteoporosis, the number of available papers tackling the topic is certainly insufficient. Therefore, it seems justified to continue research which would precisely determine its role in the metabolism of the bone tissue as one of the most promising targets in osteoporosis therapy. Introduction Osteoporosis results from the disturbance of balance between the development of bone tissue and its resorption. These phenomena are induced by two types of cells: osteoblasts, with their anabolic properties, and osteoclasts, responsible for osseous tissue catabolism [1,2]. Excessive activation of the osteoclastic cell line is induced via an increased inflammatory activity of numerous cell types-macrophages, lymphocytes, fibroblasts, osteoblasts, and osteoclasts (autocrine and paracrine mechanisms)-at the cellular level [3]. Cell-to-cell inflammatory information transfer occurs via factors including inflammatory cytokines which, together with anti-inflammatory cytokines, growth factors, and other molecules, form a complex network of cell humoral correlation, called the "cytokine network" [4]. As regards the pathophysiology of osteoporosis, extensive research has documented the catabolic activity of numerous inflammatory cytokines, including IL-1β (interleukin 1 beta), TNF-α (tumor necrosis factor α), IL-6 (interleukin 6), IL-12 (interleukin 12), and IL-17 (interleukin 17), on bone mineral density (BMD) [5]. Research into the causes and optimal treatment of this disease still seems to be a necessary and demanding task due to an increasing percentage of osteoporotic patients (longer lifespan, deteriorating environmental conditions, and medications) and many health complications, e.g., osteoporotic fractures (Figure 1).
Recently conducted studies revealed the participation of the CX3CL1 chemokine (CX3C motif chemokine ligand 1, fractalkine) in the pathogenesis of osteoporosis [6,7]. Chemokines belong to the cytokine group. They are low-molecular-weight proteins participating in regulating the migration of leukocytes and other cells influencing the inflammatory process [8]. CX3CL1 and CX3CR1 (CX3C chemokine receptor 1) present unique properties among the over 50 described chemokines. Apart from its chemotactic activity, CX3CL1 is the only chemokine which has a different molecular structure and may function as an adhesion molecule facilitating easier penetration of immune system cells through the vascular endothelium to the site of inflammation [9,10]. Few studies are available worldwide which tackle the issue of laboratory evaluation of the role of CX3CL1 in the pathophysiology of osteoporosis. Therefore, it is justified to organize and critically assess previous findings regarding the subject, especially in the context of possible new osteoporosis therapies. 2. Structure and Function of the CX3CL1/CX3CR1 Signaling Axis CX3CL1 was first described in 1997 [11,12]. According to the latest nomenclature, CX3CL1 is the only representative of the CX3C (delta) subfamily [13]. The human CX3CL1-encoding gene is located on chromosome 16q13 [14]. Structurally, it differs from other chemokines by the presence of a motif encompassing three amino acid residues between two cysteine residues forming disulfide bonds which stabilize the tertiary structure of the molecule [11,15]. Two isoforms of CX3CL1 occur in the body: a soluble form (sCX3CL1) and a membrane-anchored form (mCX3CL1) [11,15]. The presence of two CX3CL1 forms in the body determines its unique role in the cytokine network. sCX3CL1 is a chemotactic factor for NK cells [16], T-lymphocytes [17], monocytes [18], and mastocytes [19]. Neutrophil binding and adhesion are influenced by mCX3CL1, which is particularly condensed within vascular endothelium cells (Figure 2). Therefore, the manifestation of the biological activity of the CX3CL1/CX3CR1 axis is most commonly observed in well-vascularized organs, such as the periosteum and periarticular tissues (e.g., the synovial membrane) [15]. In endothelial cells, mCX3CL1 functions as an adhesion molecule which facilitates the penetration of immune cells through the vascular endothelium independently of the integrin-related mechanism [12,13]. This extremely proinflammatory property facilitates a more rapid accumulation of immune system cells at the site of inflammation. Importantly, this property is unique within this group of molecules. To date, the participation of the CX3CL1/CX3CR1 axis has been confirmed in such pathologies as rheumatoid arthritis, systemic lupus erythematosus, atherosclerosis, CNS and spinal cord injury, and osteoarthritis (OA) [20-22]. Local CX3CL1 synthesis and expression is controlled by many factors, including inflammatory cytokines (IL-1β, interferon-γ, and TNF-α), lipopolysaccharide (LPS), or tissue oxygen tension (tPO 2 ) [23,24]. All the above-mentioned factors activate the network of intracellular transmitters and transcription factors, which results in an increased or reduced CX3CL1 synthesis [24]. The biological activity of CX3CL1 becomes apparent through its interaction with CX3CR1 [25]. Structurally, CX3CR1 is one of the metabotropic receptors, also called G-protein-coupled receptors (GPCR) or so-called 7-transmembrane proteins (7TM) [15].
The polypeptide chain includes 7 α-helical structures penetrating the whole thickness of the cell membrane. Therefore, three segments of the receptor may be distinguished: extracellular, transmembrane, and intracellular segments. Loops formed by the polypeptide chain externally bind ligands such as CX3CL1 [26] and CCL2 (CC chemokine ligand type 2) [27]. The C-terminus and the loops located on the cytoplasmic side form the site where the heterotrimeric Gαi protein is bound. CX3CR1, like other chemokine receptors, is characterized by polymorphism, which may define its variable affinity for CX3CL1 and present possibilities of selecting targeted therapies [28]. Possible Role of the CX3CL1/CX3CR1 Axis in Osteoclastogenesis An experiment conducted by Koizumi et al. is one of the first studies which indicated the influence of the CX3CL1/CX3CR1 axis on the development of osteoporotic lesions [7]. 8-week-old mice were used to isolate osteoclast precursors (immature bone marrow cells, splenocytes, precursors of the osteoclast line RAW 264.7), which were harvested from the tibia and the femur. Osteoblasts were also isolated from a similar site from 2-week-old mice. Mature osteoclasts (TRAP + , tartrate-resistant acid phosphatase-positive cells) were obtained from osteoclast precursor cells after using RANKL (receptor activator for nuclear factor κB ligand). All three types of cells (osteoclast precursors, mature osteoclasts, and osteoblasts) were then incubated with α-MEM (minimum essential medium eagle-alpha modification) and 10% FBS (fetal bovine serum), and further analyses were performed. First, CX3CR1 expression was assessed with the RT-PCR (reverse transcription-polymerase chain reaction) method in osteoclast precursor cells and mature osteoclasts. Marked overexpression and increased condensation of CX3CR1 were observed in precursor osteoclast cells. Such a tendency was not demonstrated in samples containing mature osteoclasts. A similar observation was made as regards samples of bones harvested from orthopedic patients, in which numerous CX3CR1 + cells (osteoclast precursors) were found. They tended to form accumulations with CX3CL1 + osteoblasts closer to bone surfaces. The authors suggested that the presence of CX3CR1 on osteoclast precursors indicated the effect of the CX3CL1/CX3CR1 axis on their differentiation into mature osteoclasts. This results from the fact that the use of anti-CX3CL1 mAB (mouse antibodies) markedly reduced the percentage of osteoclast precursors which differentiated into mature osteoclasts. Moreover, it was noted that those mature osteoclasts which expressed CX3CL1 formed numerous accumulations with CX3CR1 + cells (monocytes/macrophages) close to the bone surface. During the first phase, sCX3CL1-activated immune cells are attracted towards the bony surface region containing osteoclasts and other CX3CL1 + cells, such as vascular endothelium cells and dendritic and epithelial cells. During the second phase, CX3CR1 receptors on macrophages/monocytes bind to osteoclast mCX3CL1, which leads to the activation of a local inflammatory response and the migration of monocytes/macrophages to the bone tissue. Subsequent analyses by Koizumi et al. included a more detailed demonstration of the role of the CX3CL1/CX3CR1 axis in the development of osteoclasts via osteoblast binding. The addition of the CX3CL1 murine recombinant chemokine domain did not result in increased maturation of osteoclasts in the study samples.
This suggests that the CX3CL1/CX3CR1 axis functions as a cofactor of the osteoclast maturation reaction rather than a direct stimulant of this reaction. Osteoblast-osteoclast precursor binding is necessary for the conversion of osteoclast precursors into mature osteoclasts, and it occurs mainly via the CX3CL1/CX3CR1 axis. The activation of osteoclast precursor maturation into mature osteoclasts is managed by other signaling pathways (e.g., those associated with RANKL). This may account for the fact that the addition of anti-CX3CL1 mAB produces a distinct inhibition of osteoclast maturation, while adding recombinant CX3CL1 does not increase the maturation (a specific CX3CL1 concentration is necessary to cause binding with CX3CR1 + cells; no additive effect is observed above this concentration). Notably, an increased activation of the CCL2/CCR2 (CC chemokine receptor type 2) axis was observed in the described study, in addition to an increased condensation of CX3CR1 on osteoclast precursors after using RANKL. The biological manifestation of the CCL2/CCR2 signaling axis is generally similar to that of the CCL2/CX3CR1 and CX3CL1/CX3CR1 signaling pathways [29]. Figure 2: Schematic diagram showing the migration process of CX3CR1 + cells (in this case a CD68 + cell) from the intravascular area via the vascular endothelium (CD56 + cells) to the site of inflammation. The chemotaxis process is initiated by the increasing concentration of sCX3CL1 (a). The CD68 + cell moves towards endothelium cells that present mCX3CL1 on their surface (b). Binding of CX3CR1 to mCX3CL1 begins the reaction cycle allowing the CD68 + cell to start the diapedesis process (c). The migration process ends when the CD68 + cell is outside of the blood vessel (d). CX3CR1: CX3C chemokine receptor 1; CD: cluster of differentiation; sCX3CL1: soluble form of CX3CL1; mCX3CL1: membrane-anchored form of CX3CL1. Interestingly, numerous studies confirmed that parathormone, whose increased concentrations are observed in the course of osteoporosis, influences the overexpression of the CCL2/CCR2 signaling axis, which probably indirectly increases the effectiveness of CX3CL1 binding to its dedicated receptors [30]. Another study conducted by Kikuta et al. [31] confirmed the participation of the CX3CL1/CX3CR1 axis in promoting osteoclastogenesis, which is responsible for bone resorption and BMD reduction. First, bone marrow was sampled from mice. Next, the researchers isolated three types of cells which are mostly responsible for osteoclastogenesis regulation: CD45 + CD11b + cells (osteoblast precursors), CD45 + hematopoietic cells, and CXCR4 + CD45 − cells, accounting for 28.4%, 58.8%, and 18.0% of the sampled bone marrow cells, respectively. Based on eliminative cultures, it was reported that the population of CXCR4 + CD45 − cells incubated with RANKL and M-CSF (macrophage colony-stimulating factor) demonstrated the most marked expression of factors influencing osteoclast maturation, including SDF-1 (stromal cell-derived factor 1), CXCL7 (CXC motif chemokine ligand 7), and CX3CL1. In order to confirm the role of CX3CL1 in osteoclastogenesis regulation, antibodies against SDF-1, CXCL7, and CX3CL1 were added to RAW264.7 and bone marrow cells. A reduction in the number and activity of osteoclasts was noted for all samples.
Moreover, an influence of anti-SDF-1 and anti-CX3CL1 antibodies on osteoclast structure was observed: the colonies did not include large multinucleated mature osteoclasts but small and medium-sized, less mature forms. The authors suggested that the CX3CL1/CX3CR1 axis may be responsible for the activation and fusion of osteoclasts into multinucleated mature structures. The role of the CX3CL1/CX3CR1 axis in bone tissue resorption may be corroborated by a study by Kikuta et al., who used CX3CR1-EGFP + knock-in mouse cells (enhanced green fluorescent protein-positive cells) in the assessment of the influence of active vitamin D (1,25-D) and its analogue, eldecalcitol (ELD), on the expression of S1PR1 (sphingosine-1-phosphate receptor 1) [32]. CX3CR1-EGFP + and CSF1R-EGFP + (colony-stimulating factor 1 receptor) cells were recognized in the study model as the most precise equivalents of osteoclast precursors. According to the authors, S1PR1 present on CX3CR1-EGFP + cells promotes their migration from bones to the blood, which prevents the accumulation of osteoclast precursors in the bone tissue and their catabolic activity. The second receptor for S1P (sphingosine-1-phosphate), S1PR2 (sphingosine-1-phosphate receptor 2), is an S1PR1 antagonist [33] causing a reduction in the mobility of CX3CR1-EGFP + cells, which remain within the bone tissue. The activity of 1,25-D and ELD contributed to diminishing S1PR2 expression in CX3CR1-EGFP + cells, which corroborated their therapeutic protective effect in the osseous tissue. No studies have been performed to analyze the direct correlation between S1P and the CX3CL1/CX3CR1 axis within the bone tissue. However, an antagonistic relation between those molecules may be observed in other tissues. A study conducted on cells, including those of the endocardium and endothelium, under in vivo conditions demonstrated that the use of S1P markedly decreased TNF-α-induced CX3CL1 expression [34]. It is a cellular anti-inflammatory/protective reaction which may largely influence the treatment of circulatory system pathologies. Presumably, a similar analogy also occurs within the bone tissue, which may be another confirmation of the influence of the CX3CR1/CX3CL1 axis on catabolic activity towards the bone tissue. However, it requires further research. Mansell et al. [35] demonstrated a correlation between CX3CR1-EGFP + cells (equivalent to osteoclast precursors) sampled from murine bone marrow and Debio0719, which is a lysophosphatidic acid (LPA) antagonist. LPA is a phospholipid whose activity is similar to that of growth factors for numerous types of cells, including nerve, epithelial and muscular cells, vascular endothelium cells, chondrocytes, osteoblasts, and osteoclasts [36]. The above-mentioned cells have 6 types of receptors for LPA, i.e., LPAR 1-6 (lysophosphatidic acid receptors 1-6), which activate intracellular signaling pathways dependent on Gα i , Gα 12/13 , Gα q , and Gα s . The presence of LPAR 1 , LPAR 2 , LPAR 4 , LPAR 5 , and LPAR 6 was noted in CX3CR1 + cells. According to the study, the most marked influence on the osseous tissue may be attributed to LPAR 1 [37]. LPAR 1 inhibition reduces the expression of osteoclast markers, including the adhesive β3 integrin, and the tendency of osteoclast precursors towards maturation and fusion into multinucleated forms.
This lowers the percentage of CX3CR1+ cells and mature osteoclasts at the bony surface because of their increased mobility to other tissues, which should be perceived as an anticatabolic activity. The reduced expression of β3 integrin in CX3CR1+ cells diminishes their capacity to bind cell membranes, indirectly lowering the effectiveness of the CX3CL1/CX3CR1 pathway, with which it acts synergistically. The experiment may indicate a possible correlation between the CX3CL1/CX3CR1 axis and LPA/LPAR1 in the promotion of osteoclastogenesis, while indicating that LPA antagonists may be a highly promising direction in the treatment of osteoporosis.

CX3CR1 expression on osteoclast precursors under the influence of REV-ERBs (nuclear reverse erythroblastosis virus heme receptors) was investigated by Cho et al. [38]. REV-ERBs (REV-ERBα and REV-ERBβ) are a group of transcriptional repressors which regulate the expression of specific cell genes via binding the nuclear corepressor/histone deacetylase 3 complex. They are also responsible for the preservation of the normal circadian rhythm and lipid metabolism [6]. In this study, following vector-mediated REV-ERB overexpression, cultured murine bone marrow-derived macrophages (BMMs) manifested a significantly lower expression of CX3CR1 (via inhibition of mRNA for the CX3CL1/CX3CR1 axis) than the control group, which received the vector alone. This resulted in a distinct reduction in RANKL-induced osteoclastogenesis. The use of SR9009, a synthetic REV-ERB agonist, triggers a similar activity. Moreover, it shifts BMM differentiation from inflammatory M1 macrophages towards M2 macrophages, which are characterized by anti-inflammatory activity. This is another example of the key role of the CX3CL1/CX3CR1 axis in the induction of inflammatory processes and osteoclastogenesis, which are directly responsible for the development of osteoporotic lesions.

The Activity of the CX3CL1/CX3CR1 Axis in the Course of Osteoporosis

Based on the available data, Chen et al. attempted to assess the concentrations of CX3CL1 in the blood serum (ELISA method) of patients with postmenopausal osteoporosis (n = 53, study group, PMOP) compared to healthy postmenopausal patients (n = 51, PMNOP) and healthy premenopausal patients (n = 50), the two control groups [6]. The obtained data were correlated with markers of osteoporosis development, i.e., the levels of bone turnover and inflammatory factors in blood serum, such as TRACP-5b, NTx, IL-1β, and IL-6. It is also emphasized that longitudinal studies and additional research should be conducted in larger groups of patients. From the viewpoint of the present authors, it would be beneficial to obtain osteoclast cell lines from PMOP and PMNOP patients for cultures and assessment of CX3CR1 expression. It would also be advisable to assess the concentrations of growth factors responsible for BMD increase and correlating negatively with the CX3CL1/CX3CR1 axis, such as VEGF or TGF-β [39,40]. This might constitute an even stronger confirmation of the role of the CX3CL1/CX3CR1 axis in the pathogenesis of osteoporosis and indicate CX3CL1 as the best immunological marker of the risk of osteoporosis or of prognosis in its course.

Conclusions and Future Perspectives

Osteoporosis is a civilization disease which remains challenging for contemporary medicine, not only in terms of treatment but also prophylaxis.
It is closely related to decreased quality of life and numerous complications resulting from BMD reduction, including fractures and accelerated progression of osteoarthritis [41,42]. Osteoporotic lesions also occur in numerous articular pathologies, such as rheumatoid arthritis or hemophilic arthropathy [43-45]. These result in very serious secondary consequences, comprising the limitation or complete immobilization of patients and cardiovascular, infectious, or thrombotic complications [46]. Therefore, osteoporosis should be viewed as a disease which increases mortality, especially in elderly patients. The causes of osteoporosis are assumed to be commonly known (endocrine disorders, neoplastic diseases, excessive alcohol consumption, smoking, and malnutrition), but attempts at assessing its etiology based on the immune system are relatively rare. Another difficulty is connected with the location of immune processes in the network of dependencies with other causes of the disease. The resultant question is whether the activity of inflammatory/catabolic factors is caused by factors such as endocrine changes, or whether endocrine changes are triggered by a primary disturbance of the balance between inflammatory and anti-inflammatory cytokines. It is difficult to indicate whether immune processes constitute the primary or secondary reason for the development of osteoporotic lesions. However, the expression of signaling axes associated with inflammatory factors, such as the CX3CL1/CX3CR1 axis, is certainly increased in osteoporosis (Table 1). The present study includes convincing evidence of a significant role of this axis in processes leading to BMD reduction. Osteoclast precursors are characterized by an increased surface density of CX3CR1, which after binding mCX3CL1 plays a principal role in osteoclast maturation (development of multinucleated forms) and in binding these cells to the surface of the bone tissue, where they may manifest their catabolic activity. The role of sCX3CL1 consists of attracting not only osteoclast precursors to the osseous tissue but also other immune system cells, which form numerous accumulations activated at the bony surface (via CX3CR1), promoting the development of inflammation and the production of other inflammatory cytokines (i.e., TNF-α, IL-1β, and IL-6). The CX3CL1/CX3CR1 axis also plays a key role in the maturation of osteoclast precursors via the binding of osteoblasts. The overexpression of this axis may recruit considerable quantities of osteoblasts bound to the bony surface into osteoclastogenesis, which shifts osseous tissue metabolism towards catabolism. All these observations were made in murine models, both in vitro and in vivo, especially through the analysis of the use of specific anti-CX3CL1 antibodies. In each case, after using the antibody, cellular colonies did not present mature multinucleated osteoclasts, but only smaller intermediate forms without chemotaxis towards the bony surface. Apart from the direct correlation between the CX3CL1/CX3CR1 axis and the activation/maturation of osteoclasts and induction of inflammatory processes, the CX3CL1/CX3CR1 axis may indirectly reduce BMD in a more subtle manner. The highly probable suppressive activity of S1P on TNF-α-induced CX3CL1 expression, which contributes to a higher degree of migration of osteoclast precursors beyond the osseous tissue, may serve as an example.
Therefore, the stimulation of the S1P/S1PR1 axis, sensitive to vitamin D3 and its analogues, presents antagonistic properties to the CX3CL1/CX3CR1 axis. The reduced vitamin D3 concentrations in blood serum reported in the course of osteoporosis may contribute to unblocking and increasing the expression of the CX3CL1/CX3CR1 signaling pathway, which, according to previous research, stimulates the process of osteoclastogenesis and further BMD reduction. Notably, the accumulation of osteoclast precursors in the osseous tissue and their maturation are influenced not only by inflammatory factors but also by molecules with growth potential, such as LPA. The LPA/LPAR1 pathway stimulates the osteoclastogenesis of CX3CR1+ cells and the expression of osteoclast markers, e.g., β3 integrin. This marker demonstrates synergy with the CX3CL1/CX3CR1 axis in increasing the tendency towards binding cell membranes of various cell types and in more effective osteoclast maturation. Inhibiting LPAR1 (e.g., by Debio0719) weakens the CX3CL1/CX3CR1 signaling pathway, which again corroborates the participation of this axis in the development of osteoporosis. Inhibiting the process of osteoclastogenesis through blocking mRNA for the CX3CL1/CX3CR1 axis in osteoclast precursors (using REV-ERBs or SR9009) is another strong piece of evidence of its participation in immune processes leading to BMD reduction.

In spite of the availability of promising and rather unambiguous data obtained from murine models, only one study concerning the correlation between CX3CL1 concentration in blood serum and osteoporosis-related markers has been conducted in humans. That study also indicated a high probability that CX3CL1 contributes to the intensification of osteoporotic lesions. The highest concentrations of CX3CL1 were observed in PMOP patients and were directly proportional to increased concentrations of bone turnover and inflammatory factors in blood serum (TRACP-5b, NTx, IL-1β, and IL-6). In those patients, the lowest BMD values, measured with DXA, were observed in locations which are typically prone to osteoporotic fractures (e.g., the proximal femur and the L-S spinal segment). The remaining groups of patients (PMNOP and healthy premenopausal women) had significantly lower concentrations of CX3CL1 in blood serum, which correlated with lower concentrations of bone turnover markers and a higher BMD on DXA examination. Therefore, the role of the CX3CL1/CX3CR1 axis in the development and exacerbation of osteoporotic lesions seems very well documented at present. Several limitations of the research may be listed: analyses conducted mainly on murine models and not in humans, small study group sizes, and a lack of longitudinal studies or fully standardized laboratory tests. Few research papers have been published on the possible role of the CX3CL1/CX3CR1 axis in promoting the process of osteoclastogenesis, although it was first described in 2009 [7]. Given that acting against the CX3CL1/CX3CR1 axis is a potential target of immune combination treatment in osteoporosis, the number of available papers tackling the topic is certainly insufficient. The results of the present study necessitate asking new questions regarding the role of the CX3CL1 axis.
It would be of great value to assess the surface density of CX3CR1 not only on osteoclast precursors and osteoblasts but also on cells which naturally surround the osseous tissue, i.e., fibroblasts or vascular endothelium, whose activity may trigger the shift of inflammation from the circulatory system (systemic) to the bone tissue (local). The network of blood vessels supplying the bone also plays a considerable role in osteoporosis. Observing the correlation of the concentrations of VEGF and other growth factors in the vascular endothelium with CX3CL1 concentrations in blood serum and DXA results might broaden the knowledge regarding the role of CX3CL1/CX3CR1 as an axis which not only activates osteoclastogenesis but also affects local perfusion and bone nourishment [47]. The research conducted by Cho et al. [38] concerning the effect of the CX3CL1/CX3CR1 axis on mRNA expression leads to the question of whether gene polymorphisms of CX3CL1/CX3CR1 contribute to osteoclastogenesis and a possible risk of developing osteoporosis in the future. Determining such a correlation might lead to a situation where a simple genetic blood test would indicate risk groups in which appropriate prophylaxis and follow-up examinations might be introduced. Osteoporosis is a multifactorial disease, so moderate optimism is recommended as regards the decisive role of the CX3CL1/CX3CR1 axis in the pathophysiology of this disease. Treatment based on specific antibodies is not expected to be the sole therapeutic method; most probably, it may become a valuable element of combination treatment. Primary or secondary overexpression of the CX3CL1/CX3CR1 axis is only one of the numerous possible causes of osteoporosis, but it seems to play a central role in the context of immunity-related causes. Therefore, it seems justified to continue research which would precisely determine its role in the metabolism of the bone tissue as one of the most promising targets in osteoporosis therapy.
Impact of human immunodeficiency virus on pulmonary vascular disease

With the advent of anti-retroviral therapy, non-AIDS-related comorbidities have increased in people living with HIV. Among these comorbidities, pulmonary hypertension (PH) is one of the most common causes of morbidity and mortality. Although chronic HIV-1 infection is independently associated with the development of pulmonary arterial hypertension, PH in people living with HIV may also be the outcome of various co-morbidities commonly observed in these individuals, including chronic obstructive pulmonary disease, left heart disease, and co-infections. In addition, the association of these co-morbidities with other risk factors, such as illicit drug use, can exacerbate the development of pulmonary vascular disease. This review focuses on these complex interactions contributing to PH development and exacerbation in HIV patients. We also examine the interactions of the HIV proteins Nef, Tat, and gp120 with the pulmonary vasculature and how these proteins alter endothelial and smooth muscle function, transforming these cells into a PH-susceptible phenotype. The review also discusses the available infectious and non-infectious animal models for studying HIV-associated PAH, highlighting the advantages and disadvantages of each model, along with their ability to mimic the clinical manifestations of HIV-PAH.

INTRODUCTION

HIV-infected patients with a controlled viral load currently have a life span comparable to that of uninfected individuals. Unfortunately, this increased life span is associated with chronic non-infectious co-morbidities that have introduced new challenges in the care and treatment of individuals living with HIV. Among these co-morbidities, cardiovascular disease (CVD) and chronic pulmonary diseases such as pulmonary hypertension (PH), chronic obstructive pulmonary disease (COPD), gas exchange abnormalities, and asthma are most prevalent. Persistent chronic inflammation, a dysregulated immune system, and oxidative stress leading to endothelial dysfunction are some of the most studied factors responsible for HIV-associated chronic pulmonary complications. Among the various associated pulmonary diseases, pulmonary arterial hypertension (PAH) is considered the most devastating, with the worst prognosis and survival rates. Although chronic HIV-1 infection is independently associated with the development of Group I pulmonary hypertension (PH), the possibility of the presence of Group II and Group III forms of PH, due to the left heart disease and chronic obstructive pulmonary disease commonly observed in these individuals, is also discussed. In this article, we also examine the association of various HIV proteins with pulmonary vascular injury to understand the pathophysiology involved. In addition, we summarize the known molecular mechanisms involved in the pathophysiology of HIV-PH based on cell culture and various animal model studies.

High prevalence of HIV-associated PAH

The prevalence of Group I PH (PAH) in people living with HIV (PLWH) is alarmingly high compared to the general population. Using echocardiography as the primary diagnostic tool, a Swiss study determined that the prevalence of PAH was 0.5% in a cohort of 1,200 HIV-infected patients during the pre-HAART era 1.
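To make the arithmetic behind such prevalence figures explicit, the short sketch below back-calculates the approximate number of affected patients in the Swiss cohort. The case count of roughly six is an inferred round figure used for illustration, not a number reported in the cited study.

```python
# Illustrative prevalence arithmetic for the Swiss pre-HAART cohort cited above.
# The case count (~6) is back-calculated from 0.5% of 1,200 and is an assumption.

def prevalence_percent(cases: int, cohort_size: int) -> float:
    """Point prevalence expressed as a percentage of the cohort."""
    return 100.0 * cases / cohort_size

cohort_size = 1200
approx_cases = round(0.005 * cohort_size)  # ~6 patients

print(f"~{approx_cases} cases -> {prevalence_percent(approx_cases, cohort_size):.1f}% prevalence")
# ~6 cases -> 0.5% prevalence
```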
A study that analyzed echocardiographic data collected from a large group of individuals enrolled in the Veterans Aging Cohort Study produced findings that may be useful in the monitoring and screening of HIV-infected patients who are at risk of developing cardiopulmonary complications. The study demonstrated that HIV-infected subjects with high viral loads (> 500 copies/ml) and low CD4 cell counts (< 200 cells/µl) were more likely than uninfected subjects to exhibit a pulmonary artery systolic pressure (PASP) exceeding the 40-mmHg threshold 2. More recently, with antiretroviral therapy emerging as the standard of care, a multicenter study conducted in France found the prevalence of PAH to be 0.46% in a population of 7,648 HIV-infected patients 3. The diagnosis of PAH in patients with unexplained dyspnea was made by measuring tricuspid regurgitant jet velocity and mean pulmonary arterial pressure using transthoracic Doppler echocardiography and right heart catheterization, respectively, according to the WHO requirements for defining PAH 4,5. In contrast, the prevalence of PAH in the general population was 0.0015%, based on the findings of a French national registry published a few years before the above study. However, smaller cohort studies that relied exclusively on echocardiography to make a diagnosis of PAH have yielded even higher prevalence rates. A cohort of 374 HIV-positive patients showed a PAH prevalence of 6.1% when determined by echocardiography 6. Using a PASP of 40 mmHg as the diagnostic threshold, Isasti et al. found a prevalence rate of 2.6% 7. Further, Quezada et al. defined a right ventricular pressure higher than 35 mmHg as PAH in their prevalence estimate.

COPD-related PH

COPD is a commonly diagnosed chronic lung disease in HIV-positive individuals 24,25, and with the advancing age of the HIV-infected population, it is becoming increasingly prevalent 26. Despite antiretroviral therapy (ART) and smoking cessation, the lung function of tobacco users with HIV continues to decline 27-29. Given this role of HIV in COPD, it is not surprising to find an increased incidence of COPD and pulmonary hypertension/right-sided heart failure (Group III PH) in PLWH 30. The prevalence of Group III PH depends on the severity of the disease, but also on the definition of PH and the method of diagnostic assessment. Spirometry data derived from Global Initiative for Chronic Obstructive Lung Disease stage IV showed that up to 90% of patients at this stage had mPAP > 20 mmHg, with the majority having pressures ranging from 20 to 35 mmHg. About 1-5% of COPD patients had mPAP > 35-40 mmHg at rest 31. In addition, exercise-induced PAH in COPD may be due to comorbid left heart disease. There is a cluster of patients exhibiting a "pulmonary vascular COPD phenotype", characterized by less severe airflow limitation, hypoxemia, reduced lung diffusing capacity (DLCO), hypocapnia, and exercise limitations due to cardiovascular dysfunction 32. It has previously been established that the presence of PH has a stronger association with mortality in COPD than forced expiratory volume in 1 s (FEV1) or gas exchange variables 33. In addition, an enlarged pulmonary artery diameter, as detected by computed tomography (CT) scan, predicts hospitalization due to acute COPD exacerbation 34.

Are Group III PH-associated vascular dysfunction and endothelial damage the chicken or the egg in disease generation and progression?
Although smoking is still the prevailing cause of the decline in lung structure and function, previous experimental studies have provided supporting evidence for the hypothesis that PH and emphysema may share common vascular (endothelial) features. For instance, Seimetz et al. have shown that PH precedes emphysema formation in a cigarette-smoking model in small rodents and that this commonality was based on endothelial iNOS upregulation and eNOS downregulation, which led to the loss of vascular structures mediated by programmed cell death (apoptosis) 35. Indeed, inducing endothelial cell death was found to be sufficient to cause both PH and emphysema in rodent models 36,37. Furthermore, EMAP II, an endothelial cell-specific proapoptotic protein, was demonstrated to be both necessary and sufficient to cause an emphysematous phenotype in small rodents exposed to cigarette smoke 38. Interestingly, endothelial-specific transgenic expression of the HIV accessory protein Nef (the functions of HIV-Nef are discussed in more detail in a later section) can also cause pulmonary emphysema, which is associated with Nef-induced EMAP II upregulation 39. The close association of vascular damage with both emphysema and PH formation is underlined by models showing the loss of microvascular structures in the lung parenchyma preceding changes in pulmonary arteries and PH 40, thus further highlighting the complex nature of Group III PH. In this context, it is noteworthy that even under moderate exercise conditions, COPD patients may show a rapid rise in mPAP, indicating loss of lung vasculature, vascular distensibility, and/or vessel recruitment capability 30.

Left heart disease-related PH

Pulmonary hypertension due to left heart disease (LHD) was referred to under the 1998 classification as pulmonary venous hypertension and classified as Group II PH 41,42. Based on underlying etiology, Group II PH patients were generally classified under two main categories, i.e., those with end-stage heart failure and those with mitral valve disease. However, this was recently extended to heart failure (HF) patients with preserved ejection fraction (HFpEF) 43. In the past, mitral stenosis was considered the major cause of PAH-LHD; however, with the advent of valve-replacement surgeries and the decrease in rheumatic heart disease, the focus of PAH-LHD has moved towards heart failure due to ischemia-reperfusion injury. These complications result in elevated filling pressure in the left atrium and subsequently lead to a retrograde increase in pulmonary venous, arterial, and capillary pressures. Patients with PH due to LHD (PAH-LHD) show an increase in pulmonary artery pressure (PAP) secondary to increased pulmonary capillary wedge pressure (PCWP). In patients with PH-LHD, PCWP is elevated (> 15 mmHg), and this form is therefore often termed post-capillary PH, in contrast to Group I PAH, which is generally referred to as pre-capillary PAH 44. Cardiomyopathy with decreased left ventricular ejection fraction (LVEF), with or without symptoms of heart failure, is considered a primary long-term cardiac complication in HIV patients. In addition to cardiomyopathy, HIV is also known to be a key risk factor for acute myocardial infarction (AMI), further leading to heart failure 45,46. Nonetheless, studies have shown that HIV represents a key risk factor for heart failure independent of AMI 47.
With the advent of ART, the number of cardiomyopathy patients with systolic dysfunction has decreased, while the number of patients with diastolic dysfunction is on the rise 48,49. A study by Hsue et al., based on echocardiographic measurements in 192 HIV-infected patients and 52 uninfected individuals, found a higher prevalence of diastolic dysfunction and increased left ventricular mass in HIV-infected individuals, associated with a low CD4 nadir 50. The SUN Study (Study to Understand the Natural History of HIV/AIDS in the Era of Effective Therapy), which reported PAH in 57% of HIV-infected patients, reported LV systolic dysfunction in 18% and diastolic dysfunction in 26% of HIV-infected subjects 22. In an observational veteran aging cohort study, an increased risk of heart failure with reduced ejection fraction (HFrEF) and heart failure with preserved ejection fraction (HFpEF) was observed in HIV-infected individuals on HAART when compared to uninfected individuals. A lower viral load was noted to be a risk factor only for HFrEF, which was observed to manifest at a younger age compared to uninfected individuals 51. However, both HFpEF and HFrEF were associated with post-capillary PAH, increasing the morbidity and mortality risk in both groups of heart failure 52. Primarily, PAH due to left heart failure involves increased left atrial and ventricular filling pressures as a consequence of compromised left ventricular systolic or diastolic function, leading to elevated pulmonary arterial pressure, vasoconstriction, and arterial vascular remodeling. The mechanisms involved in left ventricular failure/dysfunction are multifactorial and may or may not involve myocarditis. Direct infection of the myocardium, opportunistic infections, nutritional disorders, autoimmune disorders, and the effects of ART are some of the underlying factors in left ventricular failure associated with HIV. HIV patients are susceptible to opportunistic infections including Cytomegalovirus, Nocardia, and Cryptococcus 53. Direct infection of the myocardium by these organisms may lead to altered LV function secondary to infection-associated pericardial effusion 54. Reports of HIV infecting the myocardium are rare, but systemic inflammation secondary to HIV-1 results in the release of mediators such as TNF-α, iNOS, and IL-1β, resulting in inflammation-induced cardiomyopathy in HIV patients 55,56. In addition, HIV-infected individuals with low CD4 levels have a higher degree of systemic inflammation and immune activation associated with the risk of heart failure 57. Several studies in children and adults also correlate HAART with left ventricular dysfunction, which has been reported to cause a decrease in LV mass, perhaps due to mitochondrial dysfunction 58,59. Importantly, substance abuse, especially of methamphetamine, cocaine, and alcohol, is generally considered to aggravate the risk of HIV-induced HF 51. In conclusion, LHD is common among HIV patients, and according to the newer hemodynamic definition of PAH with the criterion of mPAP > 20 mmHg, a rise in the prevalence of PH among the HIV-infected population is likely. Strategies to identify individuals at risk for heart failure and PH are needed to minimize the prevalence of cardiovascular disease in PLWH.
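As an aside, the hemodynamic distinction drawn in this section can be stated compactly in code. The toy classifier below assumes only the two thresholds mentioned above (mPAP > 20 mmHg for PH and PCWP > 15 mmHg for the post-capillary form); full guideline definitions also incorporate pulmonary vascular resistance, which is deliberately omitted here.

```python
# Minimal sketch of the pre- vs post-capillary distinction described above.
# Thresholds: mPAP > 20 mmHg (newer hemodynamic definition of PH) and
# PCWP > 15 mmHg (post-capillary PH, e.g., PH due to left heart disease).
# Real guideline definitions also use pulmonary vascular resistance (omitted).

def classify_ph(mpap_mmhg: float, pcwp_mmhg: float) -> str:
    """Classify right-heart-catheterization readings (both in mmHg)."""
    if mpap_mmhg <= 20:
        return "no PH by the newer hemodynamic definition"
    if pcwp_mmhg > 15:
        return "post-capillary PH (e.g., Group II, PH due to left heart disease)"
    return "pre-capillary PH (e.g., Group I PAH)"

print(classify_ph(mpap_mmhg=32, pcwp_mmhg=10))  # pre-capillary PH
print(classify_ph(mpap_mmhg=32, pcwp_mmhg=22))  # post-capillary PH
```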
Co-infections

In spite of highly effective combinations of anti-retroviral drugs, which control the viral load and reduce mortality and morbidity in PLWH, opportunistic co-infections are still one of the major causes of death in these individuals in countries with emerging economies. Opportunistic infections, including hepatitis B virus (HBV), hepatitis C virus (HCV), Cryptococcus neoformans, and Pneumocystis jirovecii, are disproportionately more frequent in HIV-infected individuals from sub-Saharan Africa. Hepatitis C virus is known to increase the risk of pulmonary hypertension in PLWH 60. In a cross-sectional study conducted on 6,032 veteran participants, it was found that individuals co-infected with HIV/HCV had a higher PASP than uninfected individuals 61. Parikh et al. 62 linked the increased prevalence of PAH in HIV/HCV co-infected individuals with up-regulation of miRNA-21 (miR-21) in plasma. Pneumocystis pneumonia (PCP) remains a common diagnosis in HIV-infected adults 63. The initial exposure to Pneumocystis jirovecii occurs in childhood and presents as a self-limited upper respiratory infection that triggers an immune response 64. However, immunocompromised patients can become re-infected and present with low-grade fever, dyspnea, non-productive cough, elevated serum lactate dehydrogenase (LDH) levels, and ground-glass opacities on high-resolution computed tomography (HRCT). Low CD4 cell counts, suboptimal antiretroviral drug coverage, and a large population devoid of basic healthcare facilities have made individuals in developing economies more vulnerable to PCP. Studies have shown a high seroprevalence of P. jirovecii in African children 65 and adults 66,67. The gold standard for the diagnosis of PCP involves specimen collection via bronchoscopy with bronchoalveolar lavage (BAL) and the microscopic detection of cysts using immunofluorescence staining. In recent years, less invasive diagnostic tests have been implemented, including PCR assays, (1→3)-β-D-glucan serology testing, and ELISA 63,64. Prophylaxis with trimethoprim-sulfamethoxazole is recommended in patients with a CD4 count below 200 cells/µl or a previous history of AIDS-defining illnesses, such as oral candidiasis, as well as in patients with a CD4 percentage below 14% 68. Of note, following the introduction of ART, bacterial pneumonia has replaced PCP as the most frequent HIV-associated pneumonia in the United States 67,69. ART and trimethoprim-sulfamethoxazole have been reported to reduce the risk of bacterial pneumonia, whereas a lower CD4+ T cell count and cigarette smoking correlated with increased risk 68. Among parasitic diseases, schistosomiasis presents an enormous burden in underdeveloped countries. Schistosome eggs become lodged in tissues and induce granuloma formation, which is responsible for the pathology of the disease. Cross-sectional studies have shown that schistosomiasis may increase the risk of HIV infection 70,71. Infection with Schistosoma mansoni is also known to potently induce PAH, and co-infection with HIV on the African continent, where Schistosoma is endemic in certain areas due to infested fresh-water resources, is expected to further increase the incidence of PAH. However, studies addressing increased PAH in this patient population are still under way 72.
Importantly, schistosomiasis can also impair the response to antiretroviral therapy among HIV-infected patients, and treatment of schistosomiasis in co-infected patients reduces HIV viral replication and increases the CD4+ T cell count 72.

Molecular insights into HIV-PAH

Prior research has shown that the pulmonary vascular remodeling resulting in the development of HIV-PAH is driven by chronic inflammation and a bystander effect of HIV proteins released by infected or latently infected macrophages and T cells 72-76. Despite the use of several detection methods, such as immunohistochemistry, electron microscopy, polymerase chain reaction, and DNA in situ hybridization, attempts to identify HIV infection in lung vascular cells have been unsuccessful 77,78. The development of PAH in a non-infectious HIV-transgenic (Tg) rat model expressing HIV-1 proteins without active infection further highlights the bystander effect of viral proteins on the pulmonary vasculature 79. Hence, the focus of the scientific community has mainly been on analyzing the role of HIV viral proteins in the pathogenesis of HIV-PAH 80,81.

Role of chronic inflammation

Although the mortality rate of patients with HIV has decreased significantly with HAART, and non-AIDS-defining conditions have become the main cause of death among the aging population infected with the virus, altered immune activity and low-grade inflammation are still seen in patients with successfully controlled HIV viral loads. Therefore, aging and chronic inflammation have to be considered contributory factors of pulmonary vascular diseases, and in particular of pulmonary hypertension. Indeed, immune dysregulation has recently been associated with the pathogenesis of pulmonary arterial hypertension (PAH) of various etiologies. Following a pulmonary vascular insult, whether infectious, mechanical, or hypoxic, macrophages and lymphocytes occupy the perivascular space, participating in the process of vascular remodeling 82. Both the innate and adaptive immune systems are activated in HIV-infected patients; therefore, a sustained low-level immuno-inflammatory condition persists for many years even if the patient has a controlled viral load. The frequencies of activated T cells, inflammatory cytokines, and monocytes are higher in HIV patients than in healthy subjects. Even a minor increase in these inflammatory biomarkers results in a significant sustained increase in the risk of non-infectious disease-related morbidity 83. Spikes et al. reported perivascular inflammation near obliterated pulmonary vascular lesions in simian immunodeficiency virus-infected macaques, along with increased MCP-1 and IL-8 in the plasma of these macaques 84. Even in PLWH who have a normal CD4 count, increased production of pro-inflammatory cytokines such as IFN-γ can alter the function of immune cells. Lately, extracellular vesicles have also been implicated in mediating the cross-talk between infected inflammatory cells and pulmonary vascular cells 85. Extracellular vesicles (EVs) are small bi-layered membrane bodies released from all cell types that can serve as a vehicle for transferring proteins, coding and non-coding RNAs, lipids, and metabolites between cells. EVs released by HIV-infected monocyte-derived macrophages are reported to potentiate pulmonary vascular endothelial injury and smooth muscle proliferation 85, leading to the development of cardiovascular dysfunction 86.
Higher numbers of TGF-β-linked extracellular vesicles were reported in the plasma of HIV-PAH patients compared to uninfected individuals and HIV patients without PAH. Furthermore, Nef-positive EVs have been shown to promote premature senescence and eNOS downregulation through a Rac1-dependent mechanism, resulting in coronary endothelial dysfunction 87. Chronic HIV infection precipitates immune senescence by stimulating the clonal expansion of senescent T cells, with a distinctive lack of the co-stimulatory protein CD28 and overexpression of CD57. Similar to what is seen in older, uninfected individuals, these senescent cells exhibit shortened telomeres, arrested cell proliferation, and increased production of pro-inflammatory cytokines 88. Besides its effects on co-stimulatory signals, chronic HIV infection also causes an increase in the expression of co-inhibitory molecules, such as programmed death 1 (PD-1), on CD8+ T cells. PD-1 is a potent modulator of T-cell exhaustion, a phenomenon that commonly arises after chronic antigen exposure and leads to loss of effector function. The level of PD-1 upregulation correlates with markers used to clinically assess HIV disease severity, including viral load and CD4 count 89. Disease progression is also linked to upregulation of programmed death-ligand 1 (PD-L1). Compared to uninfected individuals, HIV-positive patients have higher levels of PD-L1 on antigen-presenting cells (APCs) 89,90. In the case of CD4+ T cells, PD-1, cytotoxic T-lymphocyte-associated protein 4 (CTLA-4), and other inhibitory receptors are highly expressed on the cell surface, thus contributing to the impaired function of CD4+ T cells in response to HIV infection 89. Under normal conditions, the CD28 co-stimulatory pathway promotes IL-2-dependent expansion of T cells through epigenetic modifications. However, in CD4+ T cells expressing the senescence marker CD57, IL-2 secretion is suppressed by hypermethylation at CpG site 1 of the IL-2 promoter 91. Along with the reduced CD4/CD8 ratio triggered by HIV infection, senescent T cells represent a decline in immune function and an increase in the likelihood of opportunistic infections.

Role of HIV Nef protein

HIV Nef is a viral protein expressed early in viral infection that is myristoylated at the N-terminus, has a relative molecular mass of 25-32 kDa, and is localized to the cytoplasm and the inner face of cell membranes 92-95. HIV Nef has no enzymatic activity and requires myristoylation for most of its known functions, which include restriction of immune surveillance, intracellular protein sorting, and phosphorylation of proteins. It is in the lipid rafts that Nef interacts with several host cellular proteins as an adaptor molecule. Several functional domains of Nef have been shown to be important for these effects, including a proline-rich (PxxP)3 domain, thought to interact with SH3 domains in protein partners, and six sequence domains that interact with the endocytic cellular machinery, including a dileucine motif 96-98. These domains are important for Nef-mediated downregulation of CD4 and for blocking major histocompatibility antigen I (MHC I) trafficking to the membrane, allowing the infected cell to evade immune surveillance. Nef-mediated increases in macrophage inflammatory protein-1, IL-1β, IL-6, and TNF-α in monocyte-derived macrophages (MDM) required the domains critical for the interaction with the endocytic machinery, because mutants (i.e., EE155-156QQ and DD174-175AA) were ineffective 99.
Interestingly, Nef also induces de-differentiation of kidney podocytes 100, possibly through cyclin-dependent mechanisms 101,102, suggesting that Nef can influence cell proliferation and survival in a CD4-independent fashion. Importantly, mutations in the Nef PxxP motif diminish Nef signaling and the phenotypic changes in podocytes (Figure 1) 101,103. Nef is a vascular insult even in the absence of infection. It is well established that endothelial cells (ECs) are resistant to HIV infection; however, the presence of cell-free Nef has been documented in vascular and perivascular cells. For instance, Nef induces apoptosis in brain endothelial cells when expressed intracellularly or applied exogenously 104. Expression of HIV Nef increases ERK, MEK, and Elk MAP kinase activity, thereby affecting T cell activity, viral replication, and infectivity 105,106. The long-standing dogma is that severe PAH initiates with endothelial cell apoptosis followed by dysregulated endothelial cell proliferation. In the absence of infection, Nef enters lymphocytes via the human chemokine receptor CXCR4 and exerts apoptotic effects 107. Nef also impairs vasomotor function in pulmonary artery cells, decreases the expression of endothelial nitric oxide synthase, and increases oxidative stress 108. Moreover, HIV-infected lymphocytes or macrophages in the neighborhood of the pulmonary vasculature may release mediators, including viral proteins and cytokines, that cause EC phenotypic alterations and uncontrolled proliferation. Nef can be detected in cell culture supernatants (cell-free) when expressed in cultured cells or recombinant systems. In vivo, Nef circulates at 5-10 ng/mL in the plasma of HIV-infected patients 109. Alternatively, Nef-expressing cells secrete Nef-containing exosomes that, in turn, induce cytopathic effects. Awareness regarding the transfer of viral components (infectious particles and/or proteins) between HIV+ cells and bystander uninfected cells, leading to pathogenic outcomes like oxidative stress, cancer, neurocognitive impairment, and immunological dysfunction in the uninfected cells, is gaining momentum in the field 110-114. While the formation of conduits between T cells may present a novel route for HIV transmission, the transfer of HIV proteins like Nef to B cells is enough to impair adaptive immune processes such as antibody class switching 115. Nef is one of the three immediate-early HIV genes that are still transcribed in HIV-infected cells, even in those receiving ART 116. If ART reduces HIV virion production but not Nef gene expression, as previously described 116, then the persistence of Nef expression might contribute to the higher risk of Nef-induced pathologies in patients receiving suppressive ART 117. To this end, the Clauss group found and reported significant levels of Nef-positive peripheral blood mononuclear cells (PBMCs) in ART-treated patients with HIV RNA viral loads < 50 copies/ml, compared to untreated controls 118. Because such a finding could be explained by the transfer of Nef from infected cells located in lymphatic tissues, further studies tested whether Nef from lymphatic or blood-derived mononuclear cells could also transfer to venous endothelial cells. Endothelial cells, especially in developing atherosclerotic plaques, are in direct contact with circulating HIV-infected cells and in a prime position for Nef transfer.
Using a co-culture approach in which human umbilical vein endothelial cells (HUVECs) were incubated for 24 h with PBMCs from viremic untreated HIV-infected patients, the presence of Nef-positive endothelial cells with various levels of Nef positivity suggested that different degrees of Nef transfer had occurred 118. Together, those studies suggested that Nef protein may be widely transferred from HIV-infected cells to uninfected blood cells and bystander tissue cells, thus providing a means of pathogenic Nef activity even when virus replication is controlled. Sehgal and co-workers offered a sub-cellular explanation by establishing that Nef-induced disruption of protein trafficking pathways in endothelial cells and smooth muscle cells at the level of the trans-Golgi network is a pathogenetic mechanism in HIV-PAH 119. Electron microscopy of plexiform pulmonary lesions characteristic of PAH showed that endothelial cells, fibroblasts, and smooth muscle cells in the lesions had enlarged endoplasmic reticulum, Golgi stacks, vacuolation, and exocytic Weibel-Palade vesicles 120, which suggested defects in intracellular trafficking. Furthermore, they analyzed the structure of the Golgi apparatus in lung tissue sections of pulmonary vascular lesions in idiopathic PAH (IPAH) and in macaques infected with a chimeric simian immunodeficiency virus containing the human immunodeficiency virus (HIV) nef gene (SHIV-nef), with histological evidence of proliferative, obliterative, and/or plexiform arterial lesions, using subcellular three-dimensional (3D) immunoimaging. They noted aberrant Golgi fragmentation in cells containing HIV-nef-bearing endosomes and increased cytoplasmic distribution or dispersal of the Golgi tethers giantin and p115 in obliterative-plexiform lesions in patients with idiopathic PAH and in macaques infected with SHIV-nef 120. Strikingly, the vascular cells that displayed increased dispersal of Golgi tethers were the Nef-positive cells. Another noteworthy finding was that the Golgi histopathology in the SHIV-nef macaque lungs was indistinguishable from that found in idiopathic PAH. These studies from Sehgal and colleagues investigating the Golgi apparatus ignited discussions about potential relationships between disrupted endothelial cell membrane trafficking and mechanisms of PAH. While PAH is highly prevalent in HIV patients, not every patient with HIV develops PAH. The characterization of viral, genetic, or environmental (lifestyle) factors that contribute to PAH is (and shall continue to be) the subject of intense investigation. The Flores group in Colorado, USA hypothesized that certain sequence motifs would be more prevalent in alleles from HIV-infected patients with PAH than in HIV-infected normotensives. Patient cohorts from France (Drs. Marc Humbert, Gerald Simmoneau, and Cecile Goujard), Italy (Dr. Nicola Petrosillo), and California, USA (Drs. Priscilla Hsue and Laurence Huang) were investigated. Briefly, the HIV nef gene was sequenced from frozen PBMC DNA, plasma, and bronchoalveolar lavage fluid from these patients. The results showed that the polymorphisms found in human Nef were similar to those found in SHIV-nef-infected monkeys. Furthermore, bioinformatic analyses of the Nef alleles present in the French and California patient cohorts uncovered Nef polymorphisms in HIV-PAH that differed in a statistically significant manner from those of the normotensive individuals in each group 121.
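A minimal sketch of the kind of case-control comparison described above is shown below: a Fisher exact test on the carrier frequency of a single Nef polymorphism in PAH versus normotensive HIV-infected subjects. The counts are hypothetical placeholders, not data from the cited cohorts.

```python
# Hypothetical example of a polymorphism-frequency contingency analysis,
# comparing HIV-PAH patients with HIV-infected normotensive controls.
# All counts below are made-up placeholders for illustration only.
from scipy.stats import fisher_exact

carriers_pah, noncarriers_pah = 9, 11     # hypothetical PAH group (n = 20)
carriers_ctrl, noncarriers_ctrl = 3, 27   # hypothetical control group (n = 30)

table = [[carriers_pah, noncarriers_pah],
         [carriers_ctrl, noncarriers_ctrl]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```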
Specifically, 10 Nef polymorphisms were reported in the French PAH group: the PxxP motif (a proline-rich area essential for Nef interaction with SH3 domain-containing proteins 122, where x is any amino acid), the L58V CD4 down-regulation domain 94, the E63G acidic cluster mediating the sequestration of MHC-1 in the trans-Golgi network 123, and the M79I/T80N/Y81F phosphorylation site for protein kinase C 124. Additionally, changes were reported near M20, whose functional role is to interact with the adaptor protein 1 (AP-1) and efficiently prevent MHC-1 trafficking to the membrane 125. Importantly, seven of these ten Nef polymorphisms were validated in the San Francisco group with HIV-PAH 121. Using ≥ 2 polymorphisms in Nef functional domains as a cutoff, it was also investigated whether the length of HIV infection, age, or ART would have an impact on the selection of Nef variants. There was no correlation between the number of polymorphisms and the length of HIV infection or age. In addition, ART was not associated with the presence of > 2 Nef variants in subjects with PAH from Europe or San Francisco. Although the Nef variants identified tended to cluster together (L58V-Y81F, PxxP-A53P, PxxP-H40Y, and A53P-H40Y in 4 subjects, and Y81F-PxxP in 5 subjects), contingency analyses showed that no particular mutation was associated with the presence of ART 121.

Role of HIV Tat protein

HIV-1 Tat (transactivator of transcription) is a basic non-structural protein that recruits the positive transcription elongation factor b (P-TEFb) of the host to a transactivation response element (TAR) in the RNA stem-loop, thereby enhancing the transcription of HIV-1. Tat is made up of five distinct domains: N-terminal, cysteine-rich, core, basic, and C-terminal. In addition to its intracellular function of enhancing transcription from the viral promoter, approximately two-thirds of Tat is secreted by infected cells and acts extracellularly by binding to a number of neighboring and distant cells. Tat export has been reported to depend on the binding of its basic domain to phosphatidylinositol-4,5-bisphosphate (PtdIns(4,5)P2) at the plasma membrane of T cells 126. Extracellular HIV-Tat can bind integrins and vascular endothelial growth factor receptors via the C-terminal domain, which contains an Arg-Gly-Asp (RGD) sequence, and/or the basic domain 127,128. Tat, on binding to these receptors, triggers signaling pathways that affect diverse processes, culminating in the pathogenesis of several HIV-associated co-morbidities, ranging from pulmonary hypertension to cognitive abnormalities 129,130. Previous studies suggest that Tat acts as a proto-cytokine, which modulates key functions of various cell types, including endothelial cells (ECs) as well as smooth muscle cells (SMCs) 76. HIV-Tat displays a dual function with respect to cell survival and cell death depending on the micro-environment 131. Furthermore, Tat has been reported to induce EC senescence through up-regulation of miRNA 34a and miRNA 217 and inhibition of SIRT1 expression 132. Angiogenic activity is a key feature of the Tat protein; by enhancing endothelial differentiation and tumor angiogenesis, it plays a vital role in HIV-associated Kaposi sarcoma. Tat specifically attaches to the VEGF-A tyrosine kinase receptor Flk-1/kinase insert domain receptor (KDR) and activates a downstream signaling cascade to enhance angiogenesis 128 by inducing basic fibroblast growth factor and integrin pathways.
HIV Tat, via its interactions with αvβ3 integrin, stimulates focal adhesion kinase and NF-κB activation, which leads to endothelial cell proliferation 133-136. Furthermore, activation of the Ras/Rac1/ERK pathway is also implicated in Tat-mediated proliferation and survival of endothelial cells 137. This was later reported to be NADPH oxidase 4 (NOX4) dependent. However, Tat-mediated, NOX2-dependent Rac1/JNK activation has been observed to regulate the actin cytoskeletal dynamics of endothelial cells (Figure 2) 138. Tat-induced reactive oxygen species (ROS) are also known to promote HIF-1α accumulation in pulmonary endothelial cells, leading to enhanced expression of a smooth muscle mitogen, platelet-derived growth factor (PDGF) 139. In addition, Tat augments pro-oxidative conditions by reducing glutathione levels 140 and weakening the expression of Mn-superoxide dismutase (Mn-SOD), a mitochondrial superoxide scavenger 141. Contrary to its role in promoting cell survival, Tat is also known to augment apoptosis of primary microvascular endothelial cells by triggering either TNF-α- or Fas-dependent pathways 142. Induction of apoptosis by Tat is mediated not only by activation of extrinsic pathways through apoptotic ligands but also by activation of intrinsic pathways through the direct entry of Tat 143. This is supported by findings showing Tat-mediated phosphorylation of ERK1/2, caspase-3 activation, and apoptosis of coronary artery endothelial cells 144. In addition, numerous studies have implicated ROS-mediated activation of ERK1/2/MAPK signaling in altering the composition of tight junction proteins, thereby increasing endothelial permeability 145. Our group reported ROS-dependent induction of both autophagy and apoptosis during early treatment of pulmonary ECs with Tat. However, chronic exposure to Tat in the presence of opioids prevented the accumulation of cytotoxic levels of ROS due to chronic activation of autophagy, thereby helping cells adapt to chronic stress 146. HIV-Tat is also involved in the release of pro-inflammatory cytokines, including MCP-1 and IL-1β 147, and in increased expression of cell adhesion molecules such as E-selectin, ICAM-1, and VCAM-1 in ECs 148,149. It is known to stimulate ICAM-1 expression by suppressing miR-221/-222 via an NF-κB-dependent pathway 150, while stimulation of VCAM-1 expression involves induction of p38 MAP kinase and NF-κB signaling 151. Upregulation of these molecules results in the adhesion of monocytes and T cells to the endothelium, facilitating vascular injury 152-155. Multiple signaling pathways, such as activation of PDGF signaling, have been shown to enhance smooth muscle proliferation and progression to PAH 156. Tat is reported to enhance the proliferation of pulmonary arterial smooth muscle cells via activation of PDGF-β receptors 157. It further synergizes with cocaine in promoting smooth muscle hyperplasia through ligand-independent phosphorylation of the PDGF-β receptor at the Y934 residue 157. Our group demonstrated that the Tat-mediated down-regulation of the expression of the anti-proliferative bone morphogenetic protein receptor (BMPR) in pulmonary arterial smooth muscle cells worsened further in the presence of cocaine 158. This was reported to be mediated by miR-216a- and miR-301a-dependent translational repression 159.
Given that reduced expression or function of BMPR-2 signaling is known to exaggerate proliferative TGF-β signaling, a parallel increase in TGF-βR1 and TGF-βR2 expression and activation of SMAD-dependent and SMAD-independent downstream signaling cascades were observed in response to the combined treatment with cocaine and Tat.

Role of HIV gp120 protein

The envelope glycoprotein (Env) is a viral protein present on the surface of HIV virions. Extensive research efforts have investigated the use of HIV-Env as a tool to fingerprint the virus, as well as for the development of anti-HIV vaccines. The HIV envelope glycoprotein-120 (gp120) is essential for viral attachment and fusion with the host cell membrane. HIV enters cells via interactions with the CD4 receptor on the host cell and the co-receptors C-C chemokine receptor-5 (CCR5) and C-X-C chemokine receptor-4 (CXCR4). CCR5 is a receptor for RANTES/CCL5, MIP-1α/CCL3, and MIP-1β/CCL4 in primary macrophages 160. The CCR5 receptor is expressed on microglia, T lymphocytes, macrophages, and dendritic cells (DC). On the other hand, CXCR4 is a 7-transmembrane G protein-coupled receptor used by HIV as a co-receptor for preferential entry into T cell lines 161. Its natural ligand is stromal-derived factor-1 (SDF-1/CXCL12) 162. Conventionally, HIV virions that use CCR5 as a portal of entry are designated "R5", while virions using CXCR4 are referred to as "X4". The HIV preference for the CCR5 co-receptor switches to a preference for CXCR4 over the course of HIV infection; this co-receptor switch predicts progression to AIDS in ~50% of HIV+ individuals 163. The value of HIV gp120 as a molecular fingerprint has shed light on the lung as a potential reservoir for HIV. For instance, sequence analyses of the V3 loop of gp120 and pro-viral DNA copy numbers revealed temporal evolution of gp120 164 and showed that gp120 variants in bronchoalveolar lavage fluid and alveolar macrophages evolve separately from those in the peripheral blood 165,166. Early work from Kanmogne established that these cells do not express HIV co-receptors and discounted them as potential reservoirs for HIV 167. However, accumulating evidence shows gp120-associated effects on vascular and pulmonary resident immune cells. HIV gp120 is a long-standing activator of lymphocytes 168 and acts as a viral superantigen by triggering the release of cytokines critical for T(H)2 polarization from human FcεRI+ cells 169. HIV gp120 also facilitates pathogen co-colonization in the host, as it inhibits the fungistatic activity of bronchoalveolar macrophages against Cryptococcus neoformans 170 and pneumococci 171. It also promotes Mycobacterium avium growth in alveolar macrophages by enhancing prostaglandin E2 release 172, as well as Mycobacterium tuberculosis replication 173. Besides facilitating viral attachment to the host cell, gp120 exerts several cellular effects. It induces the release of IL-1β from macrophages in a time- and concentration-dependent manner 174; its effects also include apoptosis and increased secretion of endothelin-1 from human monocytes 175, astrocyte-brain microvascular endothelial cell co-cultures 176, and primary lung endothelial cells 177. Interestingly, the induction of apoptosis by gp120 in endothelial cells is mediated by the upregulation of EMAP II and its receptor CXCR4 on the surface of lung microvascular endothelial cells (Figure 3) 178.
In addition, excessive mucus formation, which is a feature of chronic bronchitis, COPD, and asthma, is also induced and regulated by HIV-gp120 in airway epithelial cells through the CXCR4-α7-nAChR-GABAAR pathway 179. Furthermore, the use of cocaine also enhances gp120-induced pulmonary endothelial cell dysfunction 180. HIV gp120 effects are also complicated by aging. Studies using mouse primary lung fibroblasts exposed to HIV gp120 showed the induction of α-SMA expression and fibroblast-to-myofibroblast trans-differentiation through activation of the CXCR4-ERK1/2 signaling pathways 181, which may complicate fibrotic changes associated with aging.

UNDERSTANDING HIV-PAH THROUGH THE LENS OF ANIMAL MODELS

Animal models provide an enormous advantage in the screening of drugs and the understanding of disease mechanisms. The ideal pre-clinical PAH model should exhibit characteristics of human pulmonary hypertension, such as elevated right ventricular pressure, enhanced pulmonary vascular remodeling, and right ventricular hypertrophy.

Recapitulating HIV-PAH in non-human primates

In the wild, some non-human primate species are susceptible to infection with the simian homolog of HIV, the simian immunodeficiency virus (SIV). These primates, when infected with SIV, do not progress to simian AIDS, but instead host the virus as a non-pathogenic parasite. These "natural hosts" for SIV include sooty mangabeys (Cercocebus atys), African green monkeys (Chlorocebus sabaeus), mandrills (Mandrillus sphinx), sun-tailed monkeys (Cercopithecus solatus), and chimpanzees (Pan troglodytes). The infection profile in the natural SIV hosts shows extremely high levels of SIV replication with no immunodeficiency (unchanged CD4+ T cell counts) and very low levels of cellular activation. Understanding how the immune systems of these primates resist disease despite infection is the subject of intense research endeavors 182,183. Groundbreaking work from Chesney & Allen in 1973 reported that stumptail monkeys (Macaca arctoides) intoxicated with monocrotaline presented cor pulmonale, with pulmonary vascular lesions that included fibrin and platelet thrombi within capillary lumina, hypertrophied arteriolar endothelium, and medial hypertrophy 188. The unique ability to track disease development in non-human primates served as a springboard for further generations of animal models of PAH. Rhesus macaques infected with SIV display systemic arteriopathy, which affects at least 22% of infected macaques dying with simian AIDS. Chalifoux and colleagues described sclerotic lesions in large and medium-sized pulmonary arteries, with intimal thickening by smooth muscle cells and collagen 189. The presence of these lesions was not associated with survival time, age, or sex; moreover, they were restricted to the pulmonary parenchyma in 37% of the animals. Systemic vasculopathy was evident from the presence of similar, albeit less severe, lesions in the kidney, heart, lymph nodes, intestinal tract, pancreas, and meninges. Generalized arteriopathy was also documented in SIV-infected macaques in the studies of Yanai et al. 190, in which lesions were characterized by intimal thickening and fibrosis with various degrees of vasculitis. SIV and HIV sequences are not identical; therefore, the derivation of conclusions about human diseases from SIV-based studies must proceed with caution.
The use of chimeric approaches has maximized the use of these animal subjects in HIV research, helping scientists to understand HIV gene evolution, evaluate anti-HIV vaccines, and discover previously undescribed roles for HIV genes. Essentially, the chimeric approach consists of recombining viruses by substituting the gene of interest of an infectious SIV molecular clone with its equivalent in HIV. The resulting chimeric virions are designated SHIV 191 . The use of these animal models has been especially compelling for research into long-term complications of HIV infection, including pulmonary vascular remodeling and pulmonary hypertension. One of the chimeric models described in the literature is the SHIVnef macaque model. The Desrosiers and Luciw groups documented that the SHIVnef macaque model is susceptible to AIDS 192,193 . This model was initially generated by infecting rhesus macaques with a molecular clone that resulted from the replacement of the SIV nef gene (strain SIVmac239, GenBank # M33262) with the HIV nef from a cloned virus isolated from an HIV-infected patient (SF-33, GenBank # AY352275) to create a recombinant SHIVnef (GenBank # AF490445) 192 . Intravenous inoculation of the macaques resulted in the animals dying of simian AIDS. Genetic characterization of the nef sequences in the animals showed Nef evolution, with at least four consistent amino acid changes acquired during passage in vivo 192 , of which two were later confirmed in a well-established cohort of individuals living with HIV and PAH 121 . Retrospective histological analyses of the lung tissue in the same macaque cohort revealed the presence of luminal obliteration, intimal disruption, medial hypertrophy, thrombosis, and recanalized lumina, which are features beyond systemic arteriopathy 194,195 . Notably, the cells within plexiform lesions were factor VIII+ and smooth muscle actin+. Moreover, monkeys infected with the parental SIVmac239 lacked pulmonary lesions. This finding was the first in the literature to suggest an association between HIV Nef and plexiform lesions in macaques. One consequence was the potential use of primates as models for HIV-associated pulmonary vascular pathology, as the plexiform lesions in monkeys were histologically indistinguishable from those in severe human PAH. The central role of Nef was established by histological colocalization of Nef to endothelial cells in macaques with severe remodeling and in patients with HIV-associated PAH 194,195 . Leveraging SHIVnef-infected macaque lung samples also allowed investigation of the mechanisms at the sub-cellular level. Studies from the Sehgal group uncovered dramatic cytoplasmic dispersal of giantin and p115, which is indicative of significant Golgi disruption 120 . The same study corroborated findings in retrospective samples from patients with idiopathic PAH. Further studies by the same group showed that Golgi fragmentation in macaque endothelial and smooth muscle cells in pulmonary lesions was exclusive to cells containing Nef endosomes 119 . Besides the formation of pulmonary vascular plexiform lesions and selective pressures on nef, SHIVnef-infected primates also recapitulate features of idiopathic, familial, scleroderma-associated, and HIV-PAH-like hematopathologies, changes in cardiac biomarkers indicative of ventricular hypertrophy, increased levels of interleukin-12 and GM-CSF, as well as decreased sCD40L, CCL-2, and CXCL-1 in plasma 196 .
Altogether, these studies establish the utility of SHIVnef macaques as models of HIV-PAH. Non-human primates have also been instrumental in researching the additive effects of drugs of abuse on HIV-mediated pulmonary vascular injury. For instance, rhesus macaques infected with the neurovirulent SIV strain macR71/17E and treated with morphine for up to 59 weeks displayed severe pulmonary arteriopathy consistent with severe angioproliferative PAH in humans. The authors concluded that morphine potentiates the effects of HIV/SIV in advancing pulmonary arteriopathy 84 . The Norris group performed echocardiography, computed tomography (CT) scans, and right heart catheterizations in cynomolgus macaques infected with SHIV-89.6P-env and rhesus macaques infected with SIV B670. Their results showed that all the infected animals (with either SIV or SHIV-env) had increased right ventricular and pulmonary arterial pressures, with no evidence of systemic hemodynamic alterations 197 . Longitudinal hemodynamic studies showed the development of either progressive or transient PAH, as well as increased pulmonary vascular collagen deposition in PAH animals 198 . In addition, there is a case report of two pigtailed macaques that died suddenly while chronically infected with an R5-tropic SHIV strain. The authors reported total occlusion of the pulmonary artery by a massive fibrin thrombus in both cases, as well as pulmonary vascular lesions similar to human PAH 199 . A layer of complication is the use of viruses with different genetic backgrounds in different macaque species. While the use of animal models has significantly improved our understanding of host-pathogen interactions in end-organ diseases beyond in vitro work, a limitation of chimeric viruses is that they allow the study of only individual HIV genes/proteins in monkeys. Whether the response of the macaque as a host to a pre-selected virus, and vice versa, is directly extrapolatable to human HIV isolates remains unaddressed. While their use in HIV-mediated pulmonary vascular diseases is appealing, macaque research is becoming difficult to justify. Nonetheless, non-human primates have revolutionized the field of PAH research because of their capability to allow measurements and interventions that are difficult or impossible to perform in humans. HIV-PAH in transgenic rats Sugen-hypoxia and monocrotaline-induced inflammation-based rodent models are the most widely used models for understanding the pathogenesis of PAH 200 . Attempts have been made to create an HIV-PAH model using non-infectious NL4-3 gag/pol HIV-1 transgenic (Tg) rats. Deletion of the gag and pol regions of the viral genome allows this model to express seven out of nine viral proteins, i.e., env, tat, nef, rev, vif, vpr and vpu 201 . These HIV-Tg rats exhibit most of the clinical manifestations observed in HIV-infected humans, including cardiac pathologies such as myocarditis and cardiomyopathy, and renal abnormalities such as proteinuria and nephrotic syndrome. Neurological changes, such as degeneration of peripheral nerves, cataracts and respiratory depression, are also observed in these rats 201 . Lund et al. demonstrated the development of pulmonary vascular remodeling and PAH in HIV-Tg rats 202 . Rats showed increased proliferation of pulmonary arterial SMCs, increased PA medial thickness, RV hypertrophy and increased RVSP.
Age plays a significant role in the heightened endpoints of PAH in the HIV-Tg rat model, as a significant difference in PAH pathology was observed between four- and nine-month-old rats 202 . This study did not find plexiform lesions in HIV-Tg rats, suggesting that HIV proteins may prime the lung vascular endothelial cells for a second hit, which may lead to their proliferation. The Sugen-hypoxia model -and several other studies -suggest that a single assault, either in the form of inflammation, hypoxia, or genetic manipulation, is not sufficient to elicit all characteristics of clinical PAH in an animal model. Therefore, the quest to develop a severe PAH rat model shifted towards introducing multiple environmental or genetic hits. Sutliff's group tested this by using hypoxia (10% O2) for four weeks as the second hit in HIV-Tg rats and demonstrated increased PVR and elevated RVSP, with a 65% increase in vascular muscularization in HIV-Tg rats compared to WT 79 . However, contrary to the findings by Lund et al., which were obtained at the Lovelace respiratory center in Albuquerque, Sutliff's group did not find an increase in RVSP and mPAP in HIV-Tg rats in the absence of hypoxia. It could be that the location of the Lovelace center at a higher altitude provided a hypoxic challenge, thereby contributing to potentiating the disease in these rats. In addition to hypoxia, drugs of abuse like morphine/opioids, cocaine and methamphetamine have been tested as highly potential second hits to viral proteins in triggering vascular injury and the development of HIV-PAH 10,203 . Cocaine treatment of HIV-Tg rats was reported to augment pulmonary vascular remodeling and increase mPAP and RVSP, when compared with HIV-Tg rats or wild-type rats not treated with cocaine 204 . Hyperproliferative SMCs isolated from HIV-Tg rats treated with cocaine had decreased expression of anti-proliferative BMPR-2 204 , with a corresponding increase in the levels of phosphorylated TGF-βR1 and TGF-βR2 205 . In the context of the undetectable viral load in PLWH, it is quite likely that circulating viral proteins and chronic inflammation -and not active viral infection -make the pulmonary vasculature susceptible to remodeling upon a second hit of environmental stressors, as demonstrated in this non-infectious HIV-transgenic rat model. Pulmonary vascular disease in HIV-transgenic mice In contrast to the robust Sugen-hypoxia rat model of irreversible PAH induced by a single dose of SU-5416 followed by hypoxia treatment, mice dosed three times with SU-5416 and exposed to 10% oxygen for three weeks exhibited only moderately elevated right ventricular systolic pressures and neo-intimal thickening of pulmonary vessels. In addition, a major disadvantage of the mouse Sugen-hypoxia model is its reversibility, i.e., returning animals from hypoxic to normoxic conditions reverses the PAH pathology 206 . Furthermore, developing an inflammation-based mouse model using monocrotaline is also one of the biggest challenges. Mouse strains lack CYP3A, a key enzyme required to metabolize monocrotaline to its active moiety dehydromonocrotaline 207,208 , and therefore fail to generate elevated pulmonary pressures. Nevertheless, deletion or overexpression of some of the molecular proteins and targets playing a key role in the development of PAH has been tried in mice to confirm and validate their role in PAH pathogenesis.
Overexpression of IL-6 209 , 5-lipoxygenase 210 and the serotonin transporter (5-HTT) 211 , or knockout/knockdown of BMPR-II 212 , the adenosine receptor 213 , prostacyclin synthase 214 and vasoactive intestinal peptide (VIP) 215 , produced a few characteristics of PAH but failed to generate full-blown or advanced disease. Likewise, HIV-transgenic (Tg-26) mice, which also express seven out of nine HIV proteins similar to HIV-Tg rats, failed to generate robust key pathological features of PAH 216 . Although these Tg-26 mice had normal PA pressure, similar to wild-type mice, they showed modest endothelial dysfunction and pulmonary vascular remodeling 217,218 . Therefore, even though mice offer the option of genetic manipulation, they are more resistant to PAH than rats, and another hit of stressors like hypoxia or illicit drugs may be needed to potentiate the vascular injury that translates into right ventricular failure. Humanized mice as a new frontier in HIV-PAH research While non-human primates closely mimic infectious and pulmonary vascular disease, smaller animals, like rodents, advance the field by bringing tractability and cost-effectiveness to the bench of the researcher. Infection and its associated immunological storms are, however, key ingredients in the recipe for HIV-PAH. Moreover, HIV is a human-tropic virus that only infects human cells. The humanization of the mouse immune system has significantly accelerated knowledge of HIV immunopathogenesis 219,220 . In essence, the hu-BLT mouse model is the best-described small animal model in the field of HIV pathogenesis. Humanized BLT mice 221 reconstitute the human immune system after transplantation with CD34+ hematopoietic stem cells in the bone marrow and implantation of human fetal liver and thymus tissue under the kidney capsule. The reconstitution results in the production of systemically disseminated human B cells, monocytes/macrophages, dendritic cells, and T lymphocytes. Human innate and adaptive immune responses are detectable in the peripheral blood of BLT mice 4-8 weeks post-engraftment 222 . This mouse model supports productive HIV infection for up to 17 weeks, as shown by plasma viral load analyses and viral DNA in the spleen, bone marrow, lymph nodes, thymus, liver, and lung after intravenous, rectal, or vaginal infection [223][224][225][226][227][228] . In addition, infected BLT mice exhibit HIV latency when treated with antiretroviral therapy 229 . It has also been demonstrated that CXCR4-utilizing HIV-1 LAI replicates at high levels and depletes CD4+ T cells in blood and tissues in humanized BLT mice 230 . Several small animal models have been successfully implemented over the years for research in severe pulmonary diseases, like the monocrotaline and the Sugen/hypoxia models. Nevertheless, they are not susceptible to HIV infection, which limits mechanistic studies in HIV-associated PAH within the framework of the inflammation invoked by infection. While mice with humanized immune systems are now established models for HIV research, their validity for pulmonary vascular studies warrants exploration. The combination of hypoxia and Sugen generated one of the most well-established models of PAH today 231,232 . The presence of HIV proteins alone tends to decrease VEGF expression in SIV-infected macaques with pulmonary arteriopathy 84 , and experimental models of PAH may employ combinations of known pulmonary vascular insults like [HIV proteins + morphine] 84 or [Sugen + morphine] 233 , all of which recreate pro-angiogenic signaling as hallmarks of PAH.
Recent studies combined HIV with either hypoxia (10% oxygen) or Sugen to test the suitability of humanized mice for HIV-PAH studies. The results indicate that neither hypoxia nor HIV alone induced severe PAH in the mice; the combination of HIV and hypoxia promoted PAH with a 25% survival rate, whereas the combination of HIV and Sugen also promoted a PAH phenotype with significantly higher survival (100%) (unpublished data). Histopathological examination of lung tissues in humanized mice with HIV-PAH showed significant inflammation at the interstitial, airway, venous, and arterial levels, with foamy cells. Quantitative analyses of pulmonary artery thickness (adjusted by diameter) showed significant muscularization of the arteries. The authors concluded that the HIV-Sugen combination leads to a model better suited for PAH and pulmonary vascular research due to its significantly higher survival and pulmonary hemodynamics (unpublished data). Humanized mice have humanized hematopoietic cells, but still have murine vascular cells. These studies create a critical and timely foundation for mechanistic studies of PAH in small animals and demonstrate the versatility and high translatability of this animal model, which is well suited for HIV research efforts with an emphasis on the heart and lungs. Additionally, studies of humanized mice co-infected with HIV and Mycobacterium tuberculosis (Mtb) led to descriptions of HIV+ cells within Mtb lung lesions and showed that HIV co-infection increases the proliferation rate of Mtb as the animals succumb to CD4 depletion 228 . The most recent generations of humanized mice now allow humanization of the immune system together with the implantation of human lung tissue. Such models are suitable for research with human pathogens like Zika virus, cytomegalovirus, Middle East respiratory syndrome coronavirus (MERS), and respiratory syncytial virus 234 . The molecular mechanisms underlying PAH pathogenesis are complex and have diverse etiologies; therefore, it is challenging to have a gold-standard animal model that mimics all the salient features of PAH observed in the clinical setting. More importantly, translation of the data obtained from these models to clinical interventions should proceed with caution, and continuous efforts to develop more robust models are needed. CLOSING REMARKS Longer survival of HIV patients due to modern HAART has refocused biomedical attention on combating the increase in secondary, non-infectious complications, including pulmonary hypertension. Complicating matters further, people living with HIV often have a history of drug abuse and exposure to co-infections, both of which increase the capacity of HIV to induce pulmonary vascular injury leading to diseases such as PAH. In an attempt to identify targets for intervention in HIV-induced comorbidities, several key molecular mechanisms of HIV-induced pulmonary vascular disease have been identified. These involve the HIV proteins Tat, Nef and gp120, which, by deregulating endothelial survival and pro-inflammatory activities, play a significant role in HIV-induced PAH. Although non-human primate and small rodent models have been examined to understand the mechanisms and pathobiology related to HIV-PAH, each model has its own limitations and does not completely recapitulate the clinical features of the disease, justifying the need to develop new animal models.
With both molecular mechanisms and complementary pre-clinical models in hand, it should not take long to develop drugs targeting HIV-induced comorbidities, including PAH.
2021-07-21T05:20:51.337Z
2021-06-30T00:00:00.000
{ "year": 2021, "sha1": "0ef04bec394bfa29d12924755f4a186b5a470243", "oa_license": "CCBY", "oa_url": "https://globalcardiologyscienceandpractice.com/index.php/gcsp/article/download/492/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ef04bec394bfa29d12924755f4a186b5a470243", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
209961998
pes2o/s2orc
v3-fos-license
EFFECTIVENESS OF COMPREHENSIVE NURSING INTERVENTION ON KNOWLEDGE REGARDING SELF-CARE MANAGEMENT AND SELF-REPORT PRACTICES RELATED TO DYSMENORRHEA AMONG ADOLESCENT GIRLS: A QUASI EXPERIMENTAL STUDY Swati Parmar 1 and Viji Mol 2. 1. M.Sc. Nursing Student, Akal College of Nursing, Eternal University, Baru Sahib, Distt. Sirmour, Himachal Pradesh, Pin-173101, India. 2. Assistant Professor, Akal College of Nursing, Eternal University, Baru Sahib, Distt. Sirmour, Himachal Pradesh, Pin-173101, India. Manuscript History: Received: 16 July 2019; Final Accepted: 18 August 2019; Published: September 2019. The World Health Organization has defined adolescents as those between the ages of 10-19 years. Adolescent girls constitute about one-fifth of the female population in the world. 2 The National Health Policy (2002) has defined adolescents as an underserved, vulnerable group that needs to be addressed, especially by the provision of information on reproductive health. 3 One of the major physiological changes that take place in adolescent girls is the onset of menarche, which is often associated with problems of irregular menstruation, excessive bleeding and dysmenorrhea. Problems can occur at any point in the menstrual cycle, which is influenced by anatomical abnormalities, physiological imbalances, and lifestyle. Knowledge regarding normal menstrual parameters is essential for the assessment of menstrual cycle experiences and disorders. Among these, dysmenorrhea is one of the common problems experienced by many adolescent girls. The term dysmenorrhea is derived from the Greek words "dys" meaning difficult/painful, "meno" meaning month and "rrhea" meaning flow. 4 Dysmenorrhea, or painful menstruation, is the most common form of gynaecological dysfunction. 5 It may begin soon after the menarche, after which it often improves with age; or it may originate later in life, after the onset of an underlying causative condition. It is the most common gynaecological disorder in women of reproductive age. Dysmenorrhea is the medical term for pain with menstruation. There are two types of dysmenorrhea: primary and secondary. 6 The treatment aims mainly at the cause rather than the symptoms. The type of treatment depends on the age, severity, and parity of the patient. 7 Need Of The Study Adolescents' dysmenorrhoeal problems are common throughout the country. Adolescent girls are a more vulnerable group, particularly in developing countries where they are traditionally married at an early age and are exposed to a greater risk of acquiring reproductive diseases. 8 As researchers, we should put effort into creating awareness about managing dysmenorrhea symptoms. Cramps are part of primary dysmenorrhea, which is associated with back pain, headaches, nausea, vomiting, dizziness, and diarrhoea. These symptoms can appear a day before the actual flow or after the usual flow, and they usually peak by the second day of flow. 9 Previous studies so far have emphasized mainly the medicinal management of dysmenorrhea. But medicines always have unwanted side effects. Moreover, in most cases the drugs are taken without prescription, thus multiplying the risks of consuming a banned or even a wrong drug.
This ultimately leads to more complications than the original disease and may even be fatal. In terms of non-pharmacological treatments, there is a large body of studies evaluating the effectiveness of non-pharmacological interventions on dysmenorrhea. According to these studies, acupressure, acupuncture, specific exercise, use of dietary ginger, hot water bottles, and a few dietary modifications have proved to be effective in primary dysmenorrhea. Such a trial, if proved successful, will make adolescent girls confident and will also support women's empowerment. 10 Statement Of The Problem "Effectiveness of comprehensive nursing intervention on knowledge regarding self-care management and self-report practices related to dysmenorrhea among adolescent girls: A quasi experimental study" Objectives Of The Study 1. To assess the knowledge regarding self-care management and self-report practices related to dysmenorrhea among adolescent girls of the experimental and control groups. 2. To develop and administer a comprehensive nursing intervention for improving self-care management and self-report practices related to dysmenorrhea among adolescent girls of the experimental group. 3. To evaluate the effectiveness of the comprehensive nursing intervention on knowledge regarding self-care management and self-report practices related to dysmenorrhea among adolescent girls of the experimental and control groups. 4. To find out the association between knowledge and practice of adolescent girls and the selected socio-demographic variables in the experimental and control groups. Material And Method:- Research approach: Quantitative research approach. Research design: Quasi experimental (non-randomised control group). Research setting: The study was conducted in two Government schools; 30 students were selected from Government Girls Senior Secondary School, Nahan and 30 students were selected from Government High School, Tokiyon. The schools were selected because of the investigators' familiarity with the setting, availability of sample subjects and feasibility of conducting the study. Population: Adolescent girls studying in 9th standard in the selected schools of Distt. Sirmour. Sample: Adolescent girls studying in Government Girls Senior Secondary School, Nahan and Government High School, Tokiyon. Sampling technique: Convenience sampling. Inclusion criteria: Adolescent girls of age group 13-18 years who were present at the time of data collection and willing to participate were selected. Exclusion criteria: Girls who had not attained menarche were excluded. Description of tool: A self-administered questionnaire was used for the study, consisting of three sections. Section 1 collects information about the socio-demographic profile and menstrual profile of participants. The socio-demographic profile includes questions on age, residency, total income of parents, literacy status of mother, knowledge of menstruation, sources of knowledge, and dietary habits. The menstrual profile covers menarche, menstrual cycle, bleeding length, number of pads, sanitary material, feeling of pain, location, measures to control pain, absence from school, and leave due to pain. Section 2: Self-administered questionnaire to assess knowledge regarding self-care management. Section 3: Checklist to assess the self-report practices related to dysmenorrhea. Procedure for data collection Step 1: A formal permission was obtained from the Principal of the Akal College of Nursing and the Principal of Government High School, Tokiyon.
The participants (experimental group) were selected from class 9th. On day 1, written informed consent was taken from the study participants. The researcher explained the purpose of the study. After that, the tool was administered to the study subjects. Participants were given 30 minutes to fill in the questionnaire. The comprehensive nursing intervention was administered to the experimental group on day 2; it included a brief introduction to the reproductive tract, the menstrual cycle, and common menstrual problems and their management. On day 3, a demonstration of exercises was given, along with education regarding the importance of sunlight, waste disposal methods and personal hygiene. A return demonstration was taken from the experimental group on day 4. Step 2: On day 6, formal permission was obtained from the Principal of Government Girls Senior Secondary School, Nahan. Participants (control group) were selected from class 9th. Written informed consent was taken from the study participants. The researcher explained the purpose of the study. After that, the tool was administered to the study subjects. Participants were given 30 minutes to fill in the questionnaire. On day 8 the post-test of the experimental group was conducted. On day 12 the post-test of the control group was conducted. Step 3: After 2 months, a post-test was taken from both groups (experimental and control). Data analysis: The data analysis was done according to the objectives of the study. Both descriptive and inferential statistics were used. Descriptive statistics: frequency, mean percentage and standard deviation. Inferential analysis: repeated-measures ANOVA and the Chi-square test were used. Findings Of The Study Findings related to socio-demographic variables: 19 (63.3%) and 14 (46.7%) of the adolescent girls in the experimental and control groups, respectively, were in the age group 15-16 years. All 30 (100%) adolescent girls in the experimental group were from an urban area, while all 30 (100%) in the control group were from a rural area. The family type was nuclear for 16 (53.3%) and 19 (63.3%) of the girls in the experimental and control groups, respectively. In both the experimental and control groups, most of the adolescent girls had a total monthly parental income of ≤5000, with 22 (73.3%) and 18 (60%) respectively. The literacy status of the mother was primary education in the experimental and control groups, with 16 (53.3%) and 17 (56.7%) respectively. All girls in the experimental group (30, 100%) and the control group (30, 100%) had knowledge regarding menstruation. As for the source of information regarding menstruation, 15 (50%) were informed by their mothers and 15 (50%) by a teacher in the experimental group, whereas 15 (50%) were informed by their mothers and 12 (40%) by teachers in the control group. In both groups the majority of adolescent girls were vegetarian, i.e., 21 (70%) in the experimental group and 21 (70%) in the control group. The tabulated results clearly show the increase in the knowledge of adolescent girls after the administration of the comprehensive nursing intervention. Thus H1, that there is a significant difference between the pre-test and post-test knowledge scores of the experimental group at the p<0.05 level of significance, is accepted. Association between knowledge and practice of adolescent girls and selected socio-demographic variables: The Chi-square test was used, and there was no association between the pre-test knowledge and pre-test practice scores of the experimental and control groups and the selected socio-demographic variables of the adolescents at the p<0.05 level of significance.
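To make the inferential analysis above concrete, here is a minimal Python sketch of the Chi-square test of association between knowledge level and a socio-demographic variable; the contingency table is a hypothetical illustration, not the study's actual data.

```python
# Minimal sketch of the Chi-square association test described above.
# The contingency table below is hypothetical, for illustration only;
# it cross-tabulates knowledge level (rows) against a socio-demographic
# variable such as mother's literacy status (columns).
from scipy.stats import chi2_contingency

observed = [
    [12, 8],   # adequate knowledge: primary education / higher education
    [4, 6],    # inadequate knowledge
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p_value:.3f}")

# At the 0.05 level, p >= 0.05 means no significant association, which is
# the pattern the study reports between knowledge/practice scores and
# socio-demographic variables.
if p_value < 0.05:
    print("Significant association at p < 0.05")
else:
    print("No significant association at p < 0.05")
```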
Discussion:- Menstruation is a natural process and is linked with several perceptions and practices. Women with better knowledge regarding menstrual hygiene and safe menstrual practices are less vulnerable to reproductive tract infections and other complications. 11 Dysmenorrhea is one of the common problems experienced by many adolescent girls. 12 Therefore, increased knowledge about menstruation right from childhood may strengthen practices and may help in reducing the suffering of millions of women. 13 The present study found poor knowledge and practices regarding dysmenorrhea. These findings are consistent with the findings of other studies. [14][15][16] The present study also assessed the effectiveness of the comprehensive nursing intervention on knowledge and practices regarding dysmenorrhea. The level of knowledge improved, but a long-term effect of the intervention was not observed, as the knowledge level decreased after two months. In contrast, the practices showed improvement after two months. These findings are consistent with the findings of other studies. [17][18][19][20][21] Efforts should be made to create more awareness regarding dysmenorrhea by incorporating this sensitive area into the school curriculum. In the present study there was no association between knowledge and practice of adolescent girls and the selected socio-demographic variables. These findings are consistent with the study done by Kavuluru P. to assess the effectiveness of a ginger preparation on dysmenorrhea among adolescent girls, which concluded that there was no association of pre-interventional and post-interventional dysmenorrhea among adolescent girls with selected socio-demographic variables. 22 On the contrary, a study done by Kanika showed a significant association of knowledge on dysmenorrhea and its treatment with the course of Nursing. 14 Choosing a suitable intervention at the appropriate time can decrease the symptoms. Many alternative and complementary therapies that are increasingly popular and used in developed countries, such as yoga, acupuncture, exercise, meditation, and therapeutic touch, help relieve menstrual discomfort by increasing vasodilation and subsequently decreasing ischemia, which ultimately leads to the release of beta-endorphins that suppress prostaglandins. Nutrition also plays a very important role in decreasing the symptoms associated with dysmenorrhea. For an intervention that is so cheap and free of side effects, even small efforts to modify lifestyle can improve adolescents' health status for a healthy contribution towards the nation. Conclusion:- Adolescents' dysmenorrhoeal problem is common throughout the country. The results show that there was an average increase in knowledge regarding self-care management and an improvement in the self-report practices. Adolescent girls are a more vulnerable group and they represent the future of the country. As researchers, we should put effort into creating innovative strategies to educate adolescents and raise awareness regarding the management of dysmenorrhea.
2019-10-31T09:13:46.395Z
2019-09-30T00:00:00.000
{ "year": 2019, "sha1": "0b340256b8b6f7b14e24295b6a7496e16f87aad0", "oa_license": "CCBY", "oa_url": "http://www.journalijar.com/uploads/581_IJAR-29034.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7c3bd7fd2e8a0b2b7f097655d8a6622ebc77d094", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
234867473
pes2o/s2orc
v3-fos-license
Study of Body Mass Index among Medical Students of a Medical College in Nepal: A Descriptive Cross-sectional Study ABSTRACT Introduction: Changes in lifestyle, food habits, lack of a nutritious diet, stress, and physical inactivity increase the body mass index among adults. Excess weight gain is an important risk factor for non-communicable diseases such as heart disease, stroke, diabetes, and some cancers (endometrial, breast, colon). Thus, this study aims to find out the body mass index of medical students of a medical college in Nepal. Methods: This descriptive cross-sectional study was conducted in the department of physiology of a tertiary care center from August 2019 to February 2020 after taking ethical clearance from the Institutional Review Committee (Reference number 192/19). Height and weight were recorded and body mass index was then calculated. Data entry was done in Microsoft Excel and analyzed using Statistical Package for the Social Sciences version 22. Results: Out of 266 medical students, 39 (15%) were overweight and 32 (12%) were underweight, with mean body mass indices of 26.60±1.99 kg/m2 and 17.13±1.19 kg/m2 respectively. The mean body mass index of males was 21.76±3.06 kg/m2 and that of females was 21.70±2.96 kg/m2. Conclusions: Compared with a similar study done in Nepal previously, we found a higher prevalence of overweight in medical students, whereas the majority of medical students had normal weight. Factors affecting body mass index in medical students should be explored further. INTRODUCTION Body mass index is an index of weight for height and is commonly used to classify overweight and obesity. The prevalence of overweight and obesity is increasing in low- and middle-income countries. The most important causes of obesity are increased intake of foods rich in fat, salt, and sugar, and lack of exercise. 1 A linear relationship exists between body mass index and blood pressure. [2][3] Obesity enhances sympathetic activity; however, blood pressure is regulated by the balancing action between the two branches of the autonomic nervous system. 4 Cardiovascular disease in younger populations is increasing nowadays due to a striking shift in the lifestyle of adults. A sedentary lifestyle, consumption of junk foods, and stress increase body mass index (BMI). 5 Higher BMI increases the risk of developing hypertension. 6 The aim of this study was to find out the mean body mass index of medical students and assess their dietary habits. METHODS The required sample size was taken as 266. Therefore, we included 266 participants in the study using a convenience sampling technique. First, the subjects were informed about the procedure and consent was taken. Then the subjects were asked to fill up a questionnaire that included a detailed history regarding dietary habits and history suggestive of any cardio-respiratory or other systemic illness. Based on the information obtained from the questionnaire, the participants were divided into two groups: vegetarians and non-vegetarians. Those who consumed foods of plant and animal sources including meat, fowl, eggs, and fish were considered non-vegetarian, and those who consumed food from only plant sources and milk were classified as vegetarians. Height was measured without shoes, to the nearest 0.5 cm, with the participant standing erect against the wall with heels together and touching the wall, and head held in an upright position.
Weight was measured with minimal clothing and no footwear on a standardized weighing machine marked from 0 to 130 kg and was recorded to the nearest 0.5 kg. Body mass index (BMI) was then calculated using the formula weight in kilograms divided by the square of the height in meters {weight (kg)/height (m2)}. It was then summarized and categorized into four groups: underweight (BMI <18.5 kg/m2), normal weight (BMI 18.5 to 24.9 kg/m2), overweight (BMI 25.0-29.9 kg/m2) and obese (BMI 30-34.9 kg/m2), in accordance with the WHO recommendation. 7 Data were entered into Microsoft Excel and analyzed using IBM Statistical Package for the Social Sciences version 22 software. The data were analyzed using descriptive statistics and have been presented as means, standard deviations, frequencies, and percentages. RESULTS The mean body mass index of all participants in this study was 21.70±1.96. Of them, 130 (49%) males and 136 (51%) females, with ages ranging from 17-25 years, were included in this study. Thirty-nine students (14.66%) were overweight, with a mean body mass index of 26.60±1.99 kg/m2. One hundred ninety-five (73.30%) students had normal body mass index and 32 (12.3%) students were underweight (Table 1). Height, weight and body mass index of male subjects were higher than those of female subjects (Table 2). Body mass index was higher in non-vegetarians than in vegetarians (Table 3). DISCUSSION This was a cross-sectional descriptive study based on a self-structured questionnaire, in which we assessed the BMI, dietary habits, blood pressure and heart rate of medical students. The prevalence of obesity is increasing continuously worldwide, affecting all ages, sexes and races, and becoming a major risk factor for non-communicable disease. 1 We found in our study that 14.66% of medical students were overweight. Similar studies that calculated the BMI of medical students have been conducted in India, Pakistan, Poland, the United Arab Emirates and Greece. [8][9][10][11] Amatya et al. 12 in their study found that 18.31% of medical students were underweight, 77.18% normal, 4.23% overweight, and 0.3% obese (one male student). Our study showed that the mean height, weight and BMI of the male participants were higher than those of their female counterparts. Similar results were found in previous studies. 13,14 The difference in weight could be attributed to the difference in bone density, as the bones of males are denser than those of females. The authors found that there is a positive association between BMI and height in prepubertal children. 13 In contrast to our study, researchers found that BMI is negatively related to height, particularly in young women. 15 In the present study, body mass index was found to be higher in non-vegetarians than in vegetarians. This is consistent with previous studies conducted by several researchers. [16][17][18] Comparison of vegetarian and non-vegetarian diets shows that vegetarian diets are usually rich in carbohydrates, dietary fiber, carotenoids, folic acid, vitamin C, vitamin E, and magnesium, and relatively low in protein, saturated fat, retinol, vitamin B12, and zinc. 19 Higher intake of dietary fiber and lower intake of animal fat can reduce body mass index. 16 Extensive reviews of observational studies that used eating-pattern methods suggest that intake of fiber-rich foods, such as vegetables, fruits, cereals, whole grains, and legumes, is inversely related to body mass index (BMI), overweight, and obesity. 18 We conducted this study with a limited sample size in a single institution.
Our study design did not permit the measurement of associations between variables. Further studies in multiple institutes with a larger sample size must be carried out to find out the true prevalence and mean values of body mass indices of Nepalese medical students. CONCLUSIONS Compared to the findings of previous research done in Nepal, we found a higher prevalence of overweight in medical students. Males had a higher prevalence of overweight than females. Among non-vegetarians, body mass index was found to be higher. Regular physical exercise and a balanced diet should be followed to prevent overweight and obesity.
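As a concrete illustration of the BMI computation and WHO categorization described in the Methods above, here is a minimal Python sketch; the sample heights and weights are hypothetical, not study data.

```python
# Minimal sketch of the BMI calculation and WHO categorization used above.
# The sample measurements are hypothetical, for illustration only.

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / (height_m ** 2)

def who_category(bmi_value: float) -> str:
    """WHO cutoffs as stated in the Methods section."""
    if bmi_value < 18.5:
        return "underweight"
    elif bmi_value < 25.0:
        return "normal weight"
    elif bmi_value < 30.0:
        return "overweight"
    else:
        return "obese"

# Hypothetical participants: (weight in kg, height in m)
participants = [(52.0, 1.72), (68.5, 1.65), (80.0, 1.70)]

for weight, height in participants:
    b = bmi(weight, height)
    print(f"{weight:5.1f} kg, {height:.2f} m -> BMI {b:5.2f} ({who_category(b)})")
```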
2021-05-21T16:57:54.878Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "4be13ed8cc545ef7114beab78e168bc9d1a96f6e", "oa_license": "CCBY", "oa_url": "http://www.jnma.com.np/jnma/index.php/jnma/article/download/6282/3428", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "061dc85e861dd51092641e4b3eda26b75a34a2c1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261527150
pes2o/s2orc
v3-fos-license
Chemical/photochemical functionalization of polyethylene terephthalate fabric: effects on mechanical properties and bonding to nitrile rubber The aim of this work is to compare the effects of chemical and photochemical functionalization on the mechanical properties of PET fabric and its adhesion to nitrile rubber (NBR). The photochemical functionalization was performed by UV irradiation of PET fabric in the presence of glutaric acid peroxide at a temperature of 60 °C for different exposure times (i.e. 60, 90 and 120 min). The chemical functionalization (i.e. hydrolysis) of PET fabrics was performed in NaOH solution at a temperature of 60 °C for different times (i.e. 60, 120, 240 and 360 min). The tensile properties of the functionalized fibers were also evaluated. The functionalized PETs were evaluated for H-pull and T-peel adhesion to NBR. It was found that both treatment methods created functional groups on the PET surface. However, carboxylation of PET under GAP/UV irradiation generated much more OH groups on the PET surface (i.e. 4.5 times). The hydrolysis of PET in NaOH solution for more than 60 min caused a significant decrement in the tensile strength, contrary to carboxylation under GAP/UV irradiation. It was also found that the pullout and T-peel adhesion to NBR decreased in the case of hydrolysis of PET, while they increased by about 33 and 12%, respectively, for GAP/UV-irradiated PET. Fiber-reinforced rubber composites are used in a wide variety of applications such as the oil, gas, automotive and aerospace industries [1][2][3]. Nitrile rubber (NBR) is a fuel-resistant rubber that is used in the production of oil- and fuel-resistant O-rings, packings, sealants, hoses and tanks [4][5][6][7]. Polyethylene terephthalate (PET) is one of the most important reinforcing fabrics used in NBR-based composites. Since PET has a neutral and inactive nature, it has poor adhesion to the NBR compound 8. It has been found that the presence of organic groups (e.g., coupling agents) on the PET fiber surface improves its interfacial interactions with a rubber matrix, but due to its relatively passive nature, functionalization should be performed before surface modification. Hydrolysis of PET has been traditionally used for this purpose. However, it has been shown that the hydrolysis process influences the PET strength properties 28,29. Among the various surface functionalization methods, UV-assisted treatment of PET fibers/fabrics seems a promising method given its high performance. Liu et al. 30 studied the surface treatment of PET fiber with succinic acid peroxide under UV light irradiation. The functionalized PETs were then reacted with methylene diphenyl di-isocyanate (MDI). It was found that fiber-rubber adhesion increased up to twofold due to functionalization. Razavizadeh et al. 31,32 functionalized PET fabric by exposing it to glutaric acid peroxide (GAP) under UV irradiation. Thereafter, the functionalized PET was grafted using MDI via a click reaction. T-peel adhesion test results showed that the surface functionalization caused more than a 200% increment in PET-to-NBR bonding strength. Because of the wide range of industrial applications of PET-reinforced NBR composites, the study of surface modification of PET fiber to improve its adhesion to NBR is an interesting topic from scientific and industrial points of view. He et al.
33 treated PET fabric with an alkaline solution and a coupling agent (KH550) with magnetic agitation. The fabric was then used in a rubber matrix. The results showed that KH550 increased the adhesion of the fibers to rubber by 33% compared to the sample without silane. Shao et al. 34 were able to attach a silane (KH570) to the PET fabric surface using a simple atmospheric plasma surface treatment device. They used this modified PET fabric to enhance PET-silicone rubber adhesion. The results showed that the peeling strength of the treated sample was 7.44 times that of the untreated fabric sample. Andideh et al. 35 used bis(triethoxysilylpropyl) tetrasulfide (TESPT), with ethoxysilyl and tetrasulfide groups, to strengthen the bonding between short carbon fibers (CF) and styrene-butadiene rubber (SBR). The results showed that the tear strength increased by 24.5% in the transverse direction upon adding the modified CF. These studies show that the use of silane coupling agents can be a good option for increasing the adhesion of PET fabric to rubber. In this research, the effects of different treatment methods (i.e. chemical and photochemical) on the mechanical properties of PET and its adhesion to nitrile rubber were compared for the first time. For this purpose, PET fabric was hydrolyzed in NaOH solution for different times (i.e. 60, 120, 240 and 360 min). It was also surface treated with glutaric acid peroxide (GAP) under UV irradiation, as a competing method, for different times (i.e. 60, 90, and 120 min). The treatment times were varied to optimize the functionalization performance. The modified fabrics were characterized by ATR-FTIR, TGA, FE-SEM, EDS and XPS techniques. The effect of the functionalization methods on the mechanical properties of the PET fabrics was evaluated using the tensile test. Thereafter, the fabrics were silanized with vinyltrimethoxysilane (VTMS) to improve their bonding to NBR. Finally, bonding of the PET fabrics to NBR was measured using H-pull and T-peel adhesion tests. Experimental Materials. Polyethylene terephthalate (PET) fabric prepared by HEJAB Co (Iran) was used in this research as a reinforcing textile for the rubber matrix. The structural parameters of the PET fabric are presented in Table 1. The fabrics were immersed in a 1 wt.% solution of non-ionic detergent in distilled water at 50 °C and then dried at 60 °C for 15 min to remove oil and pollutants. Acrylonitrile-butadiene rubber (NBR) with an acrylonitrile content of 33% and a specific density of 1.31 g/cm3 (LG Company) was used as the rubber in this study (see Table 2). Glutaric anhydride, hydrogen peroxide, acetone, ethanol, sodium hydroxide (NaOH) and acetic acid were purchased from Merck Company. Vinyltrimethoxysilane (VTMS) (Evonik Company) was used as a silane coupling agent. Rubber ingredients were mixed on a laboratory two-roll mill and then mixed in a Banbury mixer according to the ASTM D-3182 standard 36. The prepared rubber sheets were conditioned at 25 °C for 24 h in a closed container before the determination of the optimum cure time at a temperature of 160 °C.
Functionalization of PET fabrics. Carboxylation of PET by GAP/UV irradiation. Glutaric acid peroxide (GAP) was prepared by a reaction of glutaric anhydride with hydrogen peroxide in a water/ice bath at 0 °C 32,37. The PET fabric was immersed in a water/acetone solution containing 5 wt% of GAP and irradiated with a UV lamp (400 W, ULTRAMED400, OSRAM Co., Germany) for 60, 90, and 120 min. The water/acetone solution was placed in an ice bath to prevent temperature rise and solvent evaporation. The carboxylated fabric was washed with deionized water and dried for 15 min at 100 °C. Silanization of PET fabric. Water and ethanol (the solvent) were first combined at a volume ratio of 20:80. The pH of the mixture was adjusted to 3-4 using acetic acid and then VTMS (at 1 wt%) was added dropwise to the solution. The silane solution was mixed for 60 min to complete the hydrolysis process. The functionalized fiber/fabric was immersed in the silane solution at ambient temperature for 60 min. Thereafter, the modified fabrics were washed with deionized water to remove the solvents and unreacted silane molecules. They were then subjected to 100 °C for 30 min in an oven for condensation of the silane molecules 36,39,40. Considering constant parameters such as time, temperature, pressure and amount of solvent, the optimal coefficient of the relationship (i.e., the X number) was calculated. The most important unknown in Eq. (1) for obtaining the amount of silane was the number of hydroxyl and carboxyl groups (−OH and −COOH) per gram of fabric (n_OH). This parameter was calculated using the TGA test data and Eq. (2), on the basis that for each water molecule detached from the surface in the specified temperature range, two surface −OH groups are removed 41,42. The value of W_H2O was obtained from Eq. (3) and from the TGA test results for the functionalized PET fabric (hydroxylated and carboxylated) in the temperature range of 120-350 °C. The number 18 is the molecular mass of water in grams per mole 5. m_modifiedPET is the weight loss percentage of the carboxylated or hydroxylated fabric in the temperature range of 120-350 °C, and m_pristinePET is the weight loss percentage of the PET fabric without surface treatment in the same range. Tests and characterizations. The tensile test was performed on the pristine PET fiber and the PET fibers functionalized in NaOH solution and under GAP/UV irradiation according to the ASTM-D2256 standard at a test speed of 10 mm/min. The H-pull test of PET cords from NBR rubber was performed according to the ASTM-D4776 standard at ambient temperature with a pulling speed of 120 mm/min. Adhesion of the pristine, functionalized and silanized PET fabrics to NBR was evaluated by the T-peel adhesion test according to the ASTM D 413 standard at ambient temperature with a separation speed of 50 mm/min. Attenuated total reflectance infrared spectroscopy (ATR-FTIR) was carried out on the PET fabrics before and after treatment with a Bruker EQUINOX 55 spectrometer. The surfaces of the treated and untreated fabrics were studied using a TESCAN-Mira III Field Emission Scanning Electron Microscope (FE-SEM) equipped with an energy-dispersive X-ray spectroscopy (EDX) device. Thermal gravimetric analysis (TGA) was performed with a METTLER-TOLEDO analyzer at a heating rate of 10 °C/min under airflow from 50 to 600 °C.
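Since Eqs. (1)-(3) themselves are missing from this copy of the text, the following Python sketch reconstructs the n_OH calculation purely from the textual definitions above; the weight-loss percentages used are hypothetical placeholders, not the paper's Table 3 values.

```python
# Sketch of the n_OH calculation described above, reconstructed from the
# textual definitions (the equations themselves are missing from this copy).
# Per the text: W_H2O is the moles of water released per gram of fabric in
# the 120-350 C range, and each released water molecule corresponds to two
# surface -OH groups. The weight-loss percentages below are hypothetical.

N_A = 6.022e23        # Avogadro's number (1/mol)
M_WATER = 18.0        # molecular mass of water (g/mol)

def w_h2o(m_modified_pct: float, m_pristine_pct: float) -> float:
    """Moles of water released per gram of fabric (Eq. 3, as described)."""
    delta_mass_fraction = (m_modified_pct - m_pristine_pct) / 100.0
    return delta_mass_fraction / M_WATER

def n_oh(m_modified_pct: float, m_pristine_pct: float) -> float:
    """Number of -OH groups per gram of fabric (Eq. 2, as described)."""
    return 2.0 * N_A * w_h2o(m_modified_pct, m_pristine_pct)

# Hypothetical TGA weight losses (%) in the 120-350 C range:
print(f"n_OH = {n_oh(m_modified_pct=1.2, m_pristine_pct=0.4):.2e} groups/g")
```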
Results and discussions Characterization of functionalized PET. The tensile properties of the pristine and functionalized PET fabrics are shown in Fig. 1. Results showed that carboxylation under GAP/UV irradiation caused a slight increment in the tensile strength but a decrement in the modulus of the PET fabrics (see Fig. 1a,b). This was attributed to the influence of the UV waves on the relaxation of PET chains from interlocks, which caused chain movement but decreased the fibers' crystallinity 32. Figure 1c illustrates that hydroxylation by NaOH caused a decrement in the tensile strength of the PET fabric. Moreover, considerable decrements in the modulus of the PET fabrics were observed at higher exposure times (see Fig. 1d). This corresponded to internal-phase hydrolysis of the fibers in NaOH solution. However, hydrolysis for durations of less than 120 min caused a smaller fall in the tensile strength compared to the photochemically functionalized fabrics. On this basis, the samples hydrolyzed for 60 and 120 min were selected for the subsequent analyses and tests. ATR-FTIR analysis of the functionalized fabrics was also investigated. It was found that there are few OH groups on the pristine PET surface, as indicated by the weak peak at a wavelength of 3500 cm−1. However, the OH content was increased by the UV-assisted functionalization. The peak at about 1720 cm−1 was attributed to the carbonyl group (-C=O) of the ester groups in the PET structure, which was intensified by functionalization due to the formation of COOH groups on the surface layer (see Fig. 2a). Figure 2b also presents the ATR-FTIR spectra of the hydroxylated PET fabrics. It is clearly seen that the PET-NaOH60 sample shows intensified peaks at 3500 cm−1 and 1720 cm−1, which confirms better hydroxylation of the PET surface in NaOH for 60 min compared to 120 min. Derivative thermogravimetric (DTG) results of the functionalized PET fabrics were investigated in Fig. 3. The results are listed in Table 3. Figure 3a shows TGA curves of PET fabrics exposed to GAP/UV irradiation for different times. The weight losses between 50-120, 120-350 and 350-430 °C were attributed to the evaporation of humidity and low molecular weight organic matter, the detachment of covalently bonded OH groups (i.e., related to -COOH groups) and PET degradation, respectively 43. The weight loss in the temperature range of 120-350 °C, which is related to the detachment of OH groups covalently bonded to the PET surface, confirmed the successful functionalization of PET by GAP/UV irradiation at the different exposure times. Moreover, the PET fabric exposed to GAP/UV irradiation for 90 min showed the highest functionalization content. On this basis, this sample was selected as the best carboxylated fabric and used in the silanization step. The number of OH groups per gram of PET was also calculated based on the weight loss in the temperature range of 120-350 °C; the results are illustrated in Table 3. DTG results showed that the photochemical functionalization did not affect the crystallinity of the PET fabric, given the negligible difference in the area under the endothermic curves (see Fig. 3b), but it somewhat decreased the degradation temperature. Figure 3c shows TGA curves of the hydroxylated PET fabrics. The results showed that the hydroxylation process also created hydroxyl (OH) groups on the PET surface. Moreover, the sample that was treated for 60 min showed a higher OH content (see Table 3). Figure 3d shows that the degradation temperature of PET slightly decreased due to hydrolysis.
Comparing the weight losses of the chemically and photochemically functionalized fabrics shows that more functional groups formed on the PET surface via the latter method. SEM micrographs of the carboxylated and hydroxylated PET fabrics were investigated. Figure 4a illustrates the smooth surface of the pristine PET fiber. Figure 4b,d show that the photochemical functionalization caused damage on the fiber surface. Based on the SEM images and the tensile results, it could be assumed that the UV irradiation caused degradation of just the surface layer of the PET fibers. On this basis, these damaged regions could be considered non-uniformities that improve the mechanical interlocking (i.e., bonding) between the fibers and the rubber matrix. It is obviously seen that hydrolysis by the alkali solution caused surface grooves and severe damage, especially in the sample exposed to NaOH for 120 min (see Fig. 4e,f). Table 4 shows the elemental characteristics (i.e., EDS results) of the functionalized fabrics. The results confirmed the functionalization of PET by both the carboxylation and hydroxylation methods, based on the increment in elemental oxygen. Effect of silanization of PET fabrics. Based on the results of the analyses and the tensile properties, the PET-GAP90 and PET-NaOH60 samples were selected as the best functionalized fabrics for the silanization step. XPS analysis of the pristine, hydroxylated, carboxylated and VTMS-modified PET fabrics is shown in Fig. 5. The intact PET fabric shows two peaks, C1s and O1s. In the PET-GAP-90 sample, the amounts of C1s and O1s decreased and increased, respectively. Also, in the PET-NaOH60 sample, the amounts of C1s and O1s decreased. After surface modification with silane, two new peaks (i.e., Si2p and Si2s) formed in both samples, which confirmed the successful surface modification of the PET fabric. In both silane-modified samples (i.e., PET-GAP-90-S and PET-NaOH60-S), the amount of C1s decreased greatly, which indicates the formation of Si-O-Si and Si-OH bonds. Figure 6 and Table 5 present the TGA results for the silanized samples (i.e., PET-GAP90-S and PET-NaOH60-S). The results show that silanization of the GAP/UV-irradiated PET fabrics caused an increment in the weight loss of the sample in the temperature range of 120-350 °C (see Fig. 6a). This weight loss was attributed to degradation of the VTMS grafted to the PET surface. It is clearly seen that the grafting of silane caused an increment in the thermal stability and the degradation temperature of the functionalized fibers (i.e., in the temperature range of 350-450 °C) (see Fig. 6b). This was attributed to the formation of an inorganic-organic layer on the PET surface. The silanization also improved the thermal stability of the NaOH-functionalized PET fabric (see Fig. 6c,d). However, the silane grafting on the hydroxylated sample was much lower than on the GAP/UV-irradiated PET (see Table 5). This was attributed to the fewer OH groups created on the PET fiber during the alkali treatment, which decreased the chance of VTMS grafting. The silane grafting ratios of the silanized PET fabrics are shown in Table 5. Figure 7 illustrates FE-SEM images and EDS analyses of the silanized fabrics. The images clearly show that the surfaces of both samples were covered by a condensed silane layer that caused the disappearance of the non-uniformities (see Fig. 7a,b).
The elemental analysis (see Fig. 7c,d and Table 4) also confirmed successful grafting of the silane on the functionalized PET surfaces by showing the presence of the Si and C elements (i.e., elements present in the VTMS structure). However, the Si content was very low in the case of the alkali treatment. Adhesion tests. H-pull test results. Figure 8 shows the pullout behavior of the functionalized and silanized PET cords from the NBR matrix. The pullout adhesion of the samples was calculated from the pullout load-displacement curves (i.e., the area under the curves). Figure 8a,b clearly demonstrate that functionalization of the PET surface by GAP/UV irradiation decreased the pullout force and the adhesion of the fibers to NBR. This corresponded to the creation of polar groups (i.e., COOH) on the PET surface by the functionalization process, which decreased its interfacial interactions with the nonpolar rubber matrix. However, the GAP/UV-irradiated PET cords showed improved adhesion after silanization (see Fig. 8b). This was attributed to the contribution of the vinyl groups of the silane (i.e., VTMS) to the vulcanization process of NBR, which bonded PET chemically to the rubber matrix 44,45. Figure 8c,d depict the pullout behavior of the hydroxylated and silanized PET cords from NBR. It is seen that hydroxylation caused a decrement in the pullout force and adhesion. This was attributed to the creation of OH groups on the PET surface, which made it hydrophilic and reduced bonding to the rubber matrix. In contrast to the UV-irradiated PET, silanization of the hydroxylated PET did not cause an increment in the adhesion. This was attributed to the very low silane grafting content and also to the severe damage that occurred in this sample during the alkali treatment, which decreased the fiber strength. In fact, these fibers were torn during the pullout test. SEM images show the surfaces of the pulled-out cords. It is clearly seen that there are few rubber particles on the surface of the functionalized fibers before silanization (see Fig. 9a,c). After silanization, the rubber adhered to the fiber surface and diffused between the fibers, especially in the GAP/UV-irradiated sample (see Fig. 9b). Figure 9d shows that the rubber diffused between the alkali-treated fibers but did not cover the fiber surface. This confirmed the results of the pullout adhesion test. T-peel test results. Figure 10 shows the T-peel test results. It is clearly seen that the functionalizations had negative effects on the T-peel adhesion of the PET fabrics to NBR, which was in good agreement with the H-pull test results. However, the GAP/UV-irradiated PET fabric showed an increment in the T-peel adhesion after silanization, while silanization had no influence on the adhesion of the alkali-treated PET to NBR. This was attributed to the damage that occurred in the alkali-treated PET fibers, which caused their rupture during the T-peel test (i.e., cohesive failure in the PET fibers instead of interfacial fracture). Figure 11 shows SEM images of the PET fabrics after the T-peel adhesion test. The images show that rubber diffused between the filaments and appeared more on the surface after silanization. Figure 12 illustrates a schematic of the interfacial interactions between the silanized GAP/UV-irradiated PET surface and NBR.
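As a sketch of how the pullout adhesion can be obtained from a load-displacement curve as described above (area under the curve), the following Python snippet integrates hypothetical test data with the trapezoidal rule; the arrays are placeholders, not measured values.

```python
# Sketch: pullout adhesion as the area under a load-displacement curve,
# as described above. The data arrays are hypothetical placeholders.
import numpy as np

displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])  # mm
load_N = np.array([0.0, 15.0, 38.0, 52.0, 41.0, 18.0, 0.0])      # N

# Trapezoidal integration gives the pullout energy (N*mm = mJ).
pullout_energy = np.trapz(load_N, displacement_mm)
peak_force = load_N.max()

print(f"Peak pullout force: {peak_force:.1f} N")
print(f"Pullout adhesion (area under curve): {pullout_energy:.1f} N*mm")
```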
In this work, the effect of chemical (i.e. by NaOH solution) and photochemical (i.e. by GAP/UV irradiation) functionalization on the thermo-mechanical properties of PET fabric and on its adhesion to nitrile rubber (NBR) was compared. By increasing the hydrolysis time in NaOH solution from 60 to 120 min, the OH content decreased 5 times and the tensile strength and modulus decreased by 11 and 9%, respectively. Also, increasing the exposure time of the PET fabric to UV irradiation beyond 90 min caused a reduction in the OH content by 29% and increased the tensile strength by 3.5%. Photochemical functionalization for 90 min created far more OH groups on the PET fabric surface (i.e. 4.5 times) compared to the hydrolysis of PET fabric in NaOH for 60 min. It also increased the tensile strength of the PET fabric by 5% compared to pristine PET, while the hydroxylated PET fabric showed a slight decrement in tensile strength. The results showed that functionalization of the PET fabric by GAP under UV irradiation increased the silane grafting ratio up to 6.5 times compared to the hydrolyzed fabric. Furthermore, it was found that the chemical and photochemical functionalization methods decreased the pullout adhesion by about 30 and 15%, respectively. They also caused about a 10% decrement in the T-peel adhesion to nitrile rubber. Functionalization of the PET fabric by GAP/UV irradiation followed by silane grafting caused 33 and 12% increments in the pullout and T-peel adhesion of PET to NBR, respectively. However, silanization did not have the same impact on the hydrolyzed fabrics, due to the lower silane grafting content and also the greater damage that occurred during the functionalization process.

Figure 1. Tensile strength and modulus of the functionalized PET fabrics by (a,b) GAP/UV irradiation and (c,d) NaOH.
Figure 2. FTIR analysis of the functionalized PET fabrics by (a) GAP/UV irradiation and (b) NaOH.
Figure 3. TGA and DTG analysis of the functionalized PET fabrics (a,b) by NaOH and (c,d) by GAP/UV irradiation.
Figure 7. (a,b) FE-SEM images and (c,d) EDS results for the silanized samples with different functionalized PET fabrics.
Figure 12. Schematic for surface functionalization of PET using GAP/UV irradiation and interfacial interactions at the surface-modified PET and NBR interface.
Table 1. Physical properties of the used PET fabric.
Table 2. The used nitrile rubber compound.
Table 3. TGA data for surface-treated PETs. NA: Avogadro number.
Table 4. The elemental analysis data of the functionalized PET fabrics.
Table 5. TGA data for the silanized samples.
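As noted above for the H-pull test, the pullout adhesion was calculated as the area under the load-displacement curve. The sketch below illustrates that calculation under stated assumptions: the displacement and load arrays are invented values, and numpy's trapezoidal rule stands in for whatever integration scheme the authors actually used.

```python
import numpy as np

# Hypothetical pullout load-displacement data for one PET cord:
# displacement in mm, load in N (illustrative values only).
displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
load_N = np.array([0.0, 12.0, 25.0, 31.0, 22.0, 9.0, 0.0])

# Pullout adhesion energy = area under the load-displacement curve,
# estimated here with the trapezoidal rule (units: N*mm = mJ).
pullout_energy = np.trapz(load_N, displacement_mm)

# The peak pullout force is commonly reported alongside the energy.
peak_force = load_N.max()

print(f"pullout adhesion energy: {pullout_energy:.1f} mJ")
print(f"peak pullout force: {peak_force:.1f} N")
```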
2023-09-06T06:17:32.382Z
2023-09-04T00:00:00.000
{ "year": 2023, "sha1": "3fa73a776ad408d39a960e2576e39b7837303a97", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-023-41432-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "445ad9fede69354f92c5e7d6fddc50cad1f155e1", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
9885735
pes2o/s2orc
v3-fos-license
Identification of the Molecular Site of Ivabradine Binding to HCN4 Channels

Ivabradine is a specific heart rate-reducing agent approved as a treatment of chronic stable angina. Its mode of action involves a selective and specific block of HCN channels, the molecular components of sinoatrial "funny" (f)-channels. Different studies suggest that the binding site of ivabradine is located in the inner vestibule of HCN channels, but the molecular details of ivabradine binding are unknown. We thus sought to investigate by mutagenesis and in silico analysis which residues of the HCN4 channel, the HCN isoform expressed in the sinoatrial node, are involved in the binding of ivabradine. Using homology modeling, we verified the presence of an inner cavity below the channel pore and identified residues lining the cavity; these residues were replaced with alanine (or valine) either alone or in combination, and WT and mutant channels were expressed in HEK293 cells. Comparison of the block efficiency of mutant vs WT channels, measured by patch-clamp, revealed that residues Y506, F509 and I510 are involved in ivabradine binding. For each mutant channel, docking simulations correctly explain the reduced block efficiency in terms of proportionally reduced affinity for ivabradine binding. In summary, our study shows that ivabradine occupies a cavity below the channel pore, and identifies specific residues facing this cavity that interact with and stabilize the ivabradine molecule. This study provides an interpretation of known properties of f/HCN4 channel block by ivabradine such as the "open channel block", the current-dependence of block and the property of "trapping" of drug molecules in the closed configuration.

Introduction

Cardiac f-channels and their neuronal relatives, h-channels, play a key role in the control of heart rate and neuronal excitability. These channels have a tetrameric composition, with single subunits belonging to the Hyperpolarization-activated Cyclic Nucleotide-gated (HCN) channel family. The HCN family includes 4 members (HCN1-4) that are differentially expressed in excitable tissues [1]. Each HCN subunit is organized according to a six-transmembrane (S1-S6) structure, with an additional C-terminal cytosolic regulatory domain involved in cyclic nucleotide binding (CNBD) [1]. As in KcsA, Shaker, and HERG channels, also in HCN channels four subunits assemble to form a conduction pathway formed by the selectivity filter, a relatively large cavity lined by hydrophobic residues, and the activation gate [2][3][4][5][6]. As well as playing a basic role in cardiac pacemaking, HCN channels have several important functions in neurons [7]. Increasing evidence suggests that drugs targeting HCN channels may be useful as prospective bradycardic, antiarrhythmic, anticonvulsant, analgesic and anaesthetic compounds [8]. Ivabradine is the first member of the "heart rate-reducing" family (If blocking agents) clinically approved by the European Medicines Evaluation Agency for the treatment of angina and heart failure. Given its clinical use, it is important to understand the molecular details of ivabradine block of HCN4 channels, the main isoform expressed in the pacemaker region of the heart. Some of the basic properties of the molecular interaction between ivabradine molecules and native f/HCN channels have been already clarified.
For example, it is known that drug molecules act intracellularly [9], and that the block is strongly state-dependent since it can only occur after channel opening [9][10][11]. In a study of mHCN1 block by ZD7288, another HCN blocking molecule interacting with pore-lining channel residues, Shin et al. [2] have shown that blocking molecules are "trapped" within channels in the closed state. Block of HCN4/f channels by ivabradine also has the unusual property of being "current-dependent", since it depends upon the flow of ions through the pore (kick-in/kick-off mechanisms). A tentative interpretation of this phenomenon predicts that the positively charged quaternary N+ ion of ivabradine antagonizes Na+/K+ permeating ions in their binding sites in the pore [9][10][11], but no evidence has been provided yet to support this hypothesis. Despite the understanding of the basic features described above, no information is available yet on the specific interaction between ivabradine and the residues of the HCN4 channel and on the molecular details of block. We therefore set out to investigate the HCN4 channel block by ivabradine with three complementary approaches: 1) in silico analysis through homology molecular modeling (seeking information on the 3D structure of the channel pore); 2) mutagenesis and electrophysiological characterization of the interaction between ivabradine and mutated HCN4 channels; 3) in silico molecular docking, providing insight into the drug binding mode. Our data identify some specific residues in the S6 domain lining the internal mouth of the channel pore acting in concert to bind ivabradine and provide a molecular-based explanation of known features of block.

Homology Modeling

In the absence of the crystal structure of the hHCN4 channel, a homology model of the pore region (S5-P-S6 region) of hHCN4 was obtained based on the Streptomyces lividans K+ channel (KcsA) X-ray structure, both in the closed (PDB code 1BL8; [12]) and open forms (PDB code 3FB7; [13]) (Fig. 1A,B). Sequence alignment was performed by first identifying potential transmembrane regions of the HCN sequences spanning the S5-P-S6 region with the programs TopPred 2 (S5: V419-V439; S6: V492-H512) [14], TMHMM v2.0 (S5: V419-M441; S6: L494-L516) [15], and HMMTOP (S5: I418-L442; S6: V492-A515) [16]. These regions were then used to guide the sequence alignment with the TM1-P-TM2 region of KcsA performed by CLUSTALW [17]. While the alignment of the HCN S6/TM2 region was readily achieved (based on the register provided by the GYG motif in the selectivity filter SF), the alignment of the predicted HCN S5/TM1 helix was not trivial, primarily because of the long S5/TM1-P linker in HCN and the limited amino acid conservation with KcsA. For this reason we introduced a third sequence, from the rat Kv1.2 channel, to provide additional information useful to search for the proper register among the KcsA, HCN4 and Kv1.2 S5/TM1 helices. The KcsA and Kv1.2 sequences were first structurally aligned by superimposing their structures in the open state (PDB codes 3FB7 and 2A79, respectively). The boundaries of the S5/TM1 helix were then defined by identifying the conservation of HCN4 A414, R417 (in the first turn of the S5/TM1 helix in both KcsA and Kv1.2), L421, L426, and G433, thus positioning HCN4 P440 at the C-terminus of the S5/TM1 helix, as expected for a proline residue. A long gap was inserted between the S5/TM1 and the pore-helix in the KcsA sequence to align the predicted S5/TM1 and S6/TM2 helices of the HCN channels with those of the template (Fig. 1B). This alignment maximizes the overlap between TM prediction and sequence similarity to the KcsA and Kv1.2 channels. The tetrameric channel template was reconstructed by applying the appropriate crystallographic symmetry operations to the crystal structures 1BL8 and 3FB7. Then, the modelling of the tetrameric hHCN4 channel was performed by the program Modeller9v3 [18] using the model-multichain symmetry option. Ten models were generated and evaluated by using the discrete optimized protein energy (DOPE) score [19]. The best model was energy-minimized using the optimize procedure as implemented in the Modeller9v3 program, and the stereochemistry was further optimized by the stereochemical idealization procedure as implemented in the program REFMAC [20]. The program Procheck [21] was used to assess stereochemical quality. Similar protein modeling was adopted for the hHCN4 mutants Y506A, I510A, and Y506A-I510A, with the channel in the closed form, and for the F509A mutant with the channel both in the open and closed form.
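The Modeller/DOPE step just described can be sketched roughly as below. This is a hypothetical minimal script, not the authors' actual one: the alignment file name and sequence codes are invented, and the multichain symmetry setup used for the tetramer (the model-multichain option) is omitted for brevity.

```python
# Minimal Modeller sketch: build 10 comparative models of the hHCN4
# pore region on the KcsA template and rank them by DOPE score.
# File names ('hcn4_kcsa.ali') and codes ('1bl8', 'hHCN4') are assumptions.
from modeller import environ
from modeller.automodel import automodel, assess

env = environ()
env.io.atom_files_directory = ['.']

a = automodel(env,
              alnfile='hcn4_kcsa.ali',   # alignment of hHCN4 with KcsA
              knowns='1bl8',             # closed-state KcsA template
              sequence='hHCN4',
              assess_methods=(assess.DOPE,))
a.starting_model = 1
a.ending_model = 10                      # ten models, as in the paper
a.make()

# Pick the model with the lowest (best) DOPE score.
ok = [m for m in a.outputs if m['failure'] is None]
best = min(ok, key=lambda m: m['DOPE score'])
print(best['name'], best['DOPE score'])
```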
hHCN4 Mutagenesis and Expression in HEK293 Cells

Point mutations were introduced into hHCN4 cDNA using the QuikChange II XL site-directed mutagenesis kit (Stratagene) and confirmed by DNA sequencing. Human Embryonic Kidney cells (HEK293, Phoenix) were cultured in Dulbecco's MEM plus GlutaMAX-I supplemented with 10% fetal bovine serum (GIBCO BRL) and antibiotics (Pen-Strep, SIGMA, Italy) at 37 °C in 5% CO2. Expression vectors (pCDNA1.1) containing either the wild type (WT) or mutated hHCN4 cDNAs and a vector with the green fluorescent protein (pmaxGFP, Amaxa Biosystems) were co-transfected in HEK cells using either the Lipofectamine Reagent (Invitrogen) or the FuGENE HD Transfection Reagent (Roche). Cells were incubated at 37 °C (5% CO2) for 2-3 days to allow for a good level of protein expression prior to electrophysiology experiments. On the day of the experiment, cells were detached and dispersed by trypsin and plated at a low density on a 35 mm plastic Petri dish. The dish was then placed under the stage of an inverted fluorescence microscope, and GFP-expressing cells were selected for voltage-clamp analysis.

Voltage-clamp Recordings

All experiments were carried out in the whole-cell configuration at a temperature of 32 ± 0.5 °C. The recording pipettes contained (in mM): NaCl 10, K-aspartate 130, MgCl2 0.5, EGTA-KOH 1, HEPES-KOH 1, ATP (Na salt) 2, GTP (Na salt) 0.1, phosphocreatine 5 (pH 7.2). The control extracellular solution contained (in mM): NaCl 140, KCl 5.4, CaCl2 1.8, MgCl2 1, D-glucose 5.5, HEPES-NaOH 5 (pH 7.4); BaCl2 1 mM and MnCl2 2 mM were added to improve dissection of the HCN current. Ivabradine was added to the extracellular solution by diluting a stock solution (10-50 mM) to the desired final concentration. Currents were recorded and filtered online at a corner frequency of 1 kHz with an Axopatch 200B amplifier, and acquired using the pClamp 10.1 software. Activation curves for HCN currents were obtained by the following voltage protocol: from a holding potential of −25/−35 mV, test voltage steps ranging from −40 to −145 mV (15 mV interval) were applied until steady-state current activation was attained at each potential; test steps were then followed by a pulse to −130 mV (or −145 mV in some protocols) and by a deactivating pulse to +10 mV.
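Activation curves obtained with this protocol are conventionally summarized by the half-activation voltage V1/2 and the slope factor s (the quantities reported in Table S1). A hedged sketch of such a Boltzmann fit is shown below; the tail-current values are invented for illustration.

```python
# Sketch: fit a Boltzmann function to normalized tail-current activation
# data to extract V1/2 and the slope factor s. Data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, s):
    """Fractional activation at voltage v (mV); rises with hyperpolarization."""
    return 1.0 / (1.0 + np.exp((v - v_half) / s))

# Test potentials (mV) and normalized tail currents (illustrative only).
v = np.array([-40, -55, -70, -85, -100, -115, -130, -145], dtype=float)
y = np.array([0.02, 0.05, 0.15, 0.42, 0.74, 0.92, 0.98, 1.00])

(v_half, s), _ = curve_fit(boltzmann, v, y, p0=(-85.0, 8.0))
print(f"V1/2 = {v_half:.1f} mV, slope factor s = {s:.1f} mV")
```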
Time constants of activation (at −140 mV) and deactivation (at +5 mV) were obtained by fitting the time-dependent current traces with a monoexponential function; the initial delay [22] was ignored. Ivabradine block of WT hHCN4 channels was investigated by superfusing the drug during repetitive (0.5 Hz) application of activating (−140 mV, 0.6 s)/deactivating (+5 mV, 0.3 s) steps, from a holding potential of −35 mV. The fractional current block was calculated as the ratio between the block-induced current reduction and the control current (at −140 mV). For some mutant currents this protocol was modified to account for changes in the current kinetics as follows: for mutants Y506A, A507V, F509A, Y506A-F509A the duration of the activating step was increased to 1 s; for mutant I510A the duration of the activating step was shortened to 0.2 s; for mutant F509A the deactivation step duration and the frequency of stimulation were set to 2.5 s and 0.25 Hz, respectively, to ensure complete deactivation of the channel. Dose-response relationships were obtained by fitting experimental data with a Hill equation: Y = Ymax/(1 + (IC50/x)^nH). Only in those experiments where drug recovery was complete was a second dose of the drug tested; in all other cases each cell was exposed to a single drug concentration. All data are presented as mean ± SEM values. Statistical analysis was performed with the Student's t-test for unpaired data. Dose-response curves were compared using the extra sum-of-squares F test (GraphPad Prism 5). Significance was set at P < 0.05.

Ivabradine Docking in hHCN4 Wild-type and Mutant Channels

Docking of ivabradine to the hHCN4 tetramer, both in the closed and open form, was performed with the program AutoDock4.0 [23], which combines a rapid energy evaluation through precalculated grids of affinity potentials with a variety of search algorithms to find suitable binding positions for a ligand on a given protein. When docking, the structure of hHCN4 was kept rigid, but all the torsional bonds in the ivabradine molecule were set free to perform flexible docking. Polar hydrogens were added to hHCN4 by using the Hydrogens module in AutoDock Tools (ADT); after that, Kollman united-atom partial charges were assigned. Docking was carried out using the Lamarckian genetic algorithm, applying a default protocol for 100 independent docking runs. Results were clustered according to the 2.0 Å root-mean-square deviation (RMSD) criterion. The grid maps representing the proteins in the actual docking process were calculated with AutoGrid, with a spacing of 0.375 Å between the grid points. The grid size was chosen (and centered) to be sufficiently large to include the internal channel at the tetrameric hHCN4 interface. Similar ligand docking procedures were adopted for the hHCN4 mutants Y506A, I510A, and Y506A-I510A, with the channel in the closed form, and for the F509A mutant with the channel both in the open and closed form.
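The Hill fit used above for the dose-response relationships can be reproduced with a few lines of scipy; the sketch below assumes hypothetical block fractions at a handful of ivabradine concentrations.

```python
# Sketch: fit the Hill equation Y = Ymax / (1 + (IC50/x)**nH) to
# fractional-block data. Concentrations and block values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hill(x, y_max, ic50, n_h):
    return y_max / (1.0 + (ic50 / x) ** n_h)

conc_uM = np.array([0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # ivabradine, µM
block = np.array([0.12, 0.30, 0.55, 0.78, 0.90, 0.95])   # fractional block

(y_max, ic50, n_h), _ = curve_fit(hill, conc_uM, block, p0=(1.0, 3.0, 1.0))
print(f"Ymax = {y_max:.2f}, IC50 = {ic50:.1f} µM, nH = {n_h:.2f}")
```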
Residues Facing the Water-filled Cavity

Previous results have suggested that ivabradine exerts its blocking action on f/HCN channels by binding to a site within the aqueous cavity in the inner mouth of the pore [10,11]. A similar binding location has been proposed for mHCN2 block by cilobradine [24], a structural analog of ivabradine, and also for mHCN1/mHCN2 block by ZD7288, a structurally different molecule [2,24,25]. To identify residues potentially involved in drug binding, we first explored the spatial orientation of residues facing this inner cavity by means of an in silico homology 3D model of the pore region (S5-S6 region) of hHCN4. Homology models were obtained based on the Streptomyces lividans K+ channel (KcsA) X-ray structures, both in the closed [12] and in the open conformation [13] (Fig. 1A). We selected KcsA as a model rather than Kv1.2 (both are comparably similar to HCN4 in terms of sequence alignment) for three main reasons: (i) the KcsA structure is known in both the closed and open forms, while only the open structure is available for Kv1.2; (ii) the KcsA structure has been already used successfully as a model template for the HCN2 channel [24]; (iii) the C-terminal part of the TM2/S6 helix in Kv1.2 hosts two almost consecutive proline residues (P405, P407) which confer to the helix a structure divergent from that of KcsA, where these proline residues are missing (Fig. 1B). Since HCN sequences do not have proline residues in the TM2/S6 helix, and prolines usually alter the normal H-bonding pattern of helices, the choice of KcsA as a model for HCN4 seemed more appropriate.

(Figure 1 legend: The secondary structure elements, as defined in the crystal structure of KcsA, are indicated. P is pore, SF is selectivity filter. Residues of HCN channels identical and similar to those of KcsA and/or Kv1.2 are highlighted by green and yellow boxes, respectively. Residues that face the internal cavity of the hHCN4 channel in the open and closed forms and residues relevant to mHCN2 block by cilobradine [24] and to mHCN1/mHCN2 block by ZD7288 [2,24,25] are indicated in red bold font. doi:10.1371/journal.pone.0053132.g001)

Both the KcsA-based open and closed models delineate a cavity lined by residues of the S6 segments and by the lower part of the pore (P) region, which includes the selectivity filter (SF). More specifically, in the closed-configuration model the inner cavity of the hHCN4 channel pore lies below the selectivity filter and is lined by residues L477, C478, A503, Y506, A507, and I510. Note that the side chains of residues Y506 and I510 are arranged in a sort of double-layer crown which delimits the floor of this cavity (Fig. 1A, left). In the open form, the inner cavity is lined by residues L477, C478, A503, Y506, A507, F509, and I510, and the diameter of the channel at the cytosolic entrance is ~6.5 Å larger than in the closed structure (distance calculated between the I510 side chains of opposite subunits). In the open conformation the floor of the inner cavity, even if looser, is still delimited by a double-layer crown of residues (Y506 and F509, Fig. 1A, right). In Fig. 1B the sequence alignment of the S5-P-S6 regions of hHCN4 with the corresponding regions of mHCN1, mHCN2, KcsA, and Kv1.2 is shown. The alignment reveals that most of the hHCN4 residues facing the internal cavity (L477, C478, A503, Y506, A507, F509, and I510, bold red) match those previously reported to be involved in ZD7288 and cilobradine interactions with mHCN2 (A425, I432, bold red) [24] and mHCN1 (C347, Y375, M377, F378, V379, bold red) [2,25]. It is important to note that while residue M377 of mHCN1 is reported to be potentially involved in the blocking action of ZD7288 [2], the corresponding residue (M508) in our models points towards the S5-S6 interface and does not line the internal cavity in either the closed or open forms of the hHCN4 channel (Fig. 1A).
Based on the above modeling information, we generated single mutants by replacing L477, C478, A503, Y506, A507, M508, F509, and I510 with alanine or, when the native residue was an alanine, with valine, in order to investigate which of the potentially interfering residues are involved in ivabradine binding.

Ivabradine Action on hHCN4 WT and Mutant Channels

All mutations except L477A were associated with functional channels when expressed in HEK293 cells, and the biophysical properties are listed in Table S1; as shown in Fig. S1 and Table S1, the L477A mutant was normally expressed in the plasma membrane, but did not generate functional currents. As apparent from the data in Table S1, some of the mutants underwent significant changes in the voltage dependence of gating (V1/2 and time constants of activation/deactivation). Since the blocking action of ivabradine (30 µM) on hHCN4 WT and mutant currents was evaluated during trains of activating/deactivating steps (−140/+5 mV), voltages at which all types of channels were fully open or fully closed, respectively, these changes did not affect the measurement of mean steady-state ivabradine block. The time courses of current amplitude at −140 mV during drug superfusion and sample current traces in control (a) and at steady-state block (b) are shown in Fig. 2A,B, left and right panels, respectively. In all cases ivabradine caused a reduction of the current which accumulated over time until steady-state block was attained, normally within the first 50-100 s of drug perfusion. Mean steady-state percent block values were: 88.8 ± 1.6% (n = 7), 87.6 ± 3.4% (n = 5), 83.6 ± 1.9% (n = 5), 33.3 ± 3.5% (n = 5), 81.9 ± 3.8% (n = 5), 90.3 ± 2.0% (n = 6), 41.9 ± 2.1% (n = 5), and 38.0 ± 3.1% (n = 4) for WT, C478A, A503V, Y506A, A507V, M508A, F509A, and I510A channels, respectively. Statistical analysis revealed that ivabradine blocks C478A, A503V, A507V, and M508A channels as efficiently as WT hHCN4 channels, but exerts a significantly less efficient block on Y506A, F509A, and I510A channels (t-test, P < 0.05 vs WT channels). For a fuller comparison, we extended the investigation of ivabradine-induced block to a wider range of drug concentrations. The resulting dose-response curves and fitting parameters are shown in Fig. 2C. The mutant dose-response curves clearly fall into two categories, those which essentially overlap the hHCN4 WT curve (C478A, A503V, A507V, and M508A) and those which show a much reduced (some 20- to 30-fold lower) sensitivity to the drug (Y506A, F509A, and I510A). These data indicate that while the C478A, A503V, A507V, and M508A mutations do not affect the efficiency of hHCN4 channel block by ivabradine, reduced block efficiency is obtained with the mutations Y506A, F509A, and I510A. This suggests that residues Y506, F509, and I510 may be involved in the drug-channel interaction, in agreement with the homology modeling data indicating that the same residues face the internal surface of the hHCN4 pore and form the double-layer crown composing the floor of the cavity in the closed (Y506 and I510) and open (Y506 and F509) states (Fig. 1A). Based on these observations, we proceeded to verify the efficacy of ivabradine block of the double mutants Y506A-I510A and Y506A-F509A. Both double mutants elicited functional currents. Properties of the double-mutant currents are provided in Table S1. The time courses of current amplitude at −140 mV during perfusion with 30 µM ivabradine and sample current traces in control (a) and at steady-state block (b) are shown in Fig. 3A, as indicated, for the double-mutant channels investigated.
The dose-response curves of ivabradine block in Fig. 3B show that while the Y506A and I510A mutations have a cumulative effect (IC50 = 2213.0 µM), this does not occur for the Y506A and F509A mutations (IC50 = 43.7 µM). This seems to suggest that a structural re-arrangement essential to determine ivabradine sensitivity is common to Y506A and F509A, and that once this has been produced by Y506A, further mutating F509 does not provide a supplemental effect. We also tested the effect of the triple mutation Y506A-F509A-I510A and verified that the blocking affinity of ivabradine for this channel is similar to that of the double mutant Y506A-I510A (IC50 of 1215.0 and 2213.0 µM, respectively; P > 0.05, Fig. 3B); this further supports the lack of cumulative action of F509A. Thus, mutation F509A is effective when alone, but not when in combination with Y506A or Y506A-I510A. The results in Fig. 4 imply that the F509A mutation does not modify block developing when channels are in the open state, and agree with the hypothesis apparent from the data in Fig. 3 that the F509A mutation affects the closed channel state, possibly through the same structural rearrangement associated with the mutation Y506A.

Ivabradine Docking in WT hHCN4 Channels

The results in Figures 2, 3, and 4 indicate that residues Y506, F509, and I510 are important determinants of hHCN4 channel block by ivabradine, with a cumulative action observed for residues Y506 and I510, but not for residues Y506 and F509, and a higher efficiency of block predicted for the closed conformation of channels. The question whether or not these residues directly interact with ivabradine was addressed by means of an in silico docking approach using a homology model structure of the hHCN4 tetrameric pore region. A similar approach was recently applied to describe the block of HCN2 channels by the drug ZD7288 [24]. We first analyzed the docking of ivabradine to the hHCN4 WT channel in its closed state using the estimated free energy of binding (ΔGb) as the scoring function. The best clustered docked models (consisting of 4 models over 100 trials) display an average ΔGb of −10.2 kcal/mol, with the bound ivabradine having its quaternary N atom approximately along the axis of the channel pore and the benzazepinone and benzocyclobutane moieties localized in two of the four hydrophobic pockets lined by L477, C478, A503, and Y506 from different subunits (Fig. 5A). The docked ivabradine molecule is stabilized by several van der Waals and hydrophobic interactions, with at least one of its heterocyclic moieties (both in the best ivabradine docking pose) making stacking interactions with the Y506 side-chains which build the floor of the cavity (Fig. 5A); only one ivabradine molecule is hosted in the cavity. Note that the average distance of the ivabradine quaternary N atom from the innermost permeating ion binding position (lowest dot in Fig. 5A) is 2.6 Å, calculated over the 4 models of the best docking cluster; this distance reduces to only 1.5 Å for the best-docked pose (ΔGb = −10.5 kcal/mol) within the best docking cluster (Fig. 5A). In this latter case the positively charged ivabradine quaternary N atom lies at a distance of 3.1-3.7 Å from each of the four carbonyl oxygen atoms of C478 of the selectivity filter, thus suggesting that the best stabilization of the bound ivabradine in the tetrameric closed channel may include additional H-bonds during the dynamic ligand-receptor interactions.
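The ΔGb values above can be translated into nominal dissociation constants via Kd = exp(ΔG/RT). The sketch below makes the closed-vs-open comparison explicit; the temperature (~298 K) is an assumption, and docking scores are rough estimates, so the absolute Kd values are indicative only.

```python
# Sketch: nominal Kd implied by AutoDock binding free energies.
import math

RT = 0.593  # kcal/mol at ~298 K (assumed temperature)

def kd_from_dg(dg_kcal_per_mol):
    """Nominal dissociation constant (molar) implied by a binding free energy."""
    return math.exp(dg_kcal_per_mol / RT)

dg_closed = -10.2  # kcal/mol, best cluster, WT closed channel
dg_open = -8.04    # kcal/mol, best cluster, WT open channel

kd_closed = kd_from_dg(dg_closed)
kd_open = kd_from_dg(dg_open)

print(f"closed: Kd ~ {kd_closed:.2e} M")
print(f"open:   Kd ~ {kd_open:.2e} M")
# Ratio ~ exp((10.2 - 8.04)/0.593) ~ 38: nominally ~40-fold tighter
# binding in the closed state, in line with the qualitative picture.
print(f"open/closed Kd ratio: {kd_open / kd_closed:.0f}")
```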
An important observation emerging from the docking analysis is the structural role played by residues Y506 and I510. In the tetrameric hHCN4 closed channel model, the four Y506 side-chains point to the interior of the channel and form the floor of the cavity hosting ivabradine (Figs. 1A left, 5A). Furthermore, I510 residues fall right below Y506 residues, and by means of hydrophobic interactions may stabilize the orientation of the Y506 side-chains (Figs. 1A left, 5A). We then analyzed ivabradine docking to the hHCN4 WT channel in the open form. As pointed out in Fig. 1A, right, when the channel is open, the internal cavity takes a more relaxed and enlarged conformation. The best clustered ivabradine docked models (2 models over 100 trials) showed an average ΔGb = −8.04 kcal/mol, a value smaller than that estimated for the closed WT hHCN4 channel (ΔGb = −10.2 kcal/mol). In the open channel, docked ivabradine molecules adopt a bent conformation, with the benzazepinone and benzocyclobutane moieties almost parallel to each other and stabilized by stacking interactions with the aromatic side-chains of Y506 and F509 (Fig. 5B). The ivabradine quaternary N atom is now positioned far below the permeating ion binding site (distance of 8.5 Å calculated over the 2 docking solutions of the best cluster), in the region of the channel lined by the F509 side-chains.

Ivabradine Docking in hHCN4 Mutant Channels

To verify whether modeling is able to explain changes in block efficiency such as those found in electrophysiology experiments with mutant Y506A and I510A channels (Figures 2 and 3), we performed additional modeling and docking calculations for the two single mutants Y506A and I510A and for the double mutant Y506A-I510A (Fig. 6). Inspection of the Y506A mutant model structure indicates that the mutation increases the volume of the docking cavity (Fig. 6A); the floor of the ivabradine docking cavity is now built by the I510 side-chains. The best clustered docking models for the Y506A mutant indicate for ivabradine a bent conformation with the benzazepinone and benzocyclobutane moieties almost orthogonal to each other, with one moiety located in the pocket lined by C478, A503, and the mutated A506, and the other moiety fitting most of the volume which in the hHCN4 WT channel is occupied by the Y506 side-chains, close to the I510 side-chains. The best docking cluster (3 models over 100 trials) for the Y506A mutant, with the channel in the closed form, shows an average ΔGb = −8.02 kcal/mol, a value smaller than that estimated for the closed WT hHCN4 channel (ΔGb = −10.2 kcal/mol). This indicates that the lack of the Y506 side-chain decreases the stabilization of the bound ligand and allows for a different spatial position of ivabradine, with its quaternary N atom displaced from the center of the pore and more distant from the lowermost permeating ion binding site (4.3 Å averaged over the 3 models of the best solution cluster). When we modelled the I510A mutant, we noticed that the loss of the hydrophobic interactions between the Y506 and I510 side-chains allows the Y506 side-chain of each subunit of the tetrameric channel to re-orient away from the centre of the pore (Fig. 6B).
Indeed, the best I510A mutant model shows that now each Y506 side-chain is located in a subunit-subunit interface, in a cavity lined by L447, G502, and F509 from one subunit, and I510, A503, and T504 from the adjacent subunit. Such a new orientation of Y506 determines a structure of the lower part of the channel and a docking behaviour of ivabradine similar to those found in the Y506A mutant (Fig. 6A,B), with the best docking cluster (3 models over 100 trials) having an average ΔGb = −8.29 kcal/mol. This agrees with the evidence above that both the Y506A and I510A single mutants are characterized by a much lower affinity for ivabradine than both the WT and any other single mutant hHCN4 channel (Fig. 2). Modelling of the Y506A-I510A double mutant shows that the double-crown side-chain layer making the floor of the internal cavity in the WT hHCN4 closed channel is now completely removed, which allows for an enlarged volume of the docking cavity at the cytosolic side of the channel (Fig. 6C). As a consequence, the docked ivabradine molecule can span a wider number of conformations, with the benzazepinone and benzocyclobutane moieties now fitting in the cavity lined by the C478, A503, and A506 side-chains and in the cavity below lined by A503, A506, and A510, respectively. The best docking cluster for the double mutant (3 models over 100 trials) has an average ΔGb = −6.7 kcal/mol, much smaller than those of the hHCN4 WT and of the Y506A and I510A single mutants, with the best docking pose of ivabradine in a bent conformation reminiscent of that found in the Y506A and I510A mutants (Fig. 6A,B,C). We then further analyzed the role of the F509 residue by means of modeling and docking experiments with the F509A mutant in both the open and closed forms. The open form of the F509A mutant channel shows a more enlarged conformation of the lower part of the channel cavity when compared to that of the WT hHCN4 (Fig. 6D). The best docking cluster (3 models over 100 trials) shows an average ΔGb = −7.16 kcal/mol. In the F509A open channel mutant the docked ivabradine molecule adopts an elongated conformation along the channel axis with the benzazepinone and benzocyclobutane moieties almost orthogonal to each other, the first hosted in the pocket lined by L477, C478, A503, and Y506 and the second in the pocket lined by Y506, A509 and I510 (Fig. 6D; see also Fig. 4). Interestingly, when we modelled the F509A mutant in the hHCN4 closed form, we observed that the absence of the F509 side-chain promotes an overall rearrangement of the surrounding structure. Indeed, as shown in Fig. 6E and Fig. S2, the removal of the F509 side-chain makes room for the Y506 side-chain of each subunit, which may thus move from the centre of the pore towards the subunit-subunit interface, as seen in the I510A mutant (Fig. 6B). For the best ivabradine cluster results (6 models over 100 trials) the average ΔGb = −8.95 kcal/mol is similar to that found for the I510A mutant (ΔGb = −8.29 kcal/mol) and smaller than that found for WT hHCN4 (ΔGb = −10.2 kcal/mol). Modeling thus provides an explanation for the block data obtained with the F509A mutation (Figs. 1 to 4) and confirms the hypothesis that the reduction of block sensitivity for this mutant is an indirect effect associated with the displacement of residue Y506.

Discussion

The relevance of the If current to cardiac pacemaker generation and modulation of rate has been documented both in experimental animals and in humans, where it represents an important pharmacological target [26].
A detailed understanding of drug-channel interaction is therefore essential in the perspective of improving HCN isoform-specific selectivity of block, particularly since differential isoform distribution in heart and brain may underlie isoform-dependent pathologies including, for example, arrhythmias, epilepsy, motor learning defects, and pain transmission [7,8,27,28]. We therefore sought to investigate details of ivabradine-induced block by identifying residues involved in the binding of ivabradine to HCN4, the HCN isoform most highly expressed in the sinoatrial node. Previous studies have suggested that the cytoplasmic side of HCN channels is composed of a water-filled cavity guarded by an intracellular activation gate, and that the binding site for HCN blocker drugs such as ZD7288, ivabradine, and cilobradine is located within this central cavity [2,3,10,11,24,25]. Although these studies have indicated the involvement of some specific residues, they did not provide an integrated and detailed description of the structural arrangement of the cavities in the channel open and closed states, and how this arrangement affects ivabradine block. The homology models shown in Fig. 1A aim to fill this gap by providing structural information on the spatial organization of residues in the S6 domain and in the S5-S6 linker that line the internal cavity in both open (L477, C478, A503, Y506, A507, F509, and I510) and closed (L477, C478, A503, Y506, A507, and I510) channel states. A similar 3D modeling approach has been previously proposed for mHCN2 [24], though limited to the open state of the channel. According to Cheng et al.'s study [24], mHCN2 residues A425 and I432 (structurally homologous to A503 and I510 of hHCN4) face the internal cavity of channels, in agreement with our data. As shown by our previous studies [10,11], the mechanism of action of ivabradine block is quite complex. The drug can only access its binding site, located in the internal channel cavity, when f/HCN4 channels are in the open state. Also, block is current-dependent since the inward current flow at hyperpolarized potentials removes block (kick-off), while depolarization favors block development (kick-in); this mechanism is responsible for the use-dependence of ivabradine. In addition, the property of current-dependence suggests that the ionic flow through the channel pore proceeds according to a multi-ion single-file permeation model, and that the charged nitrogen of ivabradine competes with permeating Na+/K+ ions at one of the coordination sites along the permeation pathway, most likely the innermost one [10].

(Figure 5 legend, fragment: "... shown as ribbon in green, yellow, orange and pale blue, respectively. Side-chains of residues interacting with the ivabradine molecule are shown in stick representation in both panels. For the closed state only, the main-chain atoms of C478 are shown since the carbonyl oxygen atoms may form additional H-bonds with ivabradine. I510 is also shown in the channel closed form, though this residue does not interact directly with ivabradine. Black spheres indicate positions corresponding to K+ ions bound in the pore of the KcsA crystal structure [12]. For clarity only residues of one subunit are labeled and L477 of the pale blue subunit is omitted." doi:10.1371/journal.pone.0053132.g005)
Since the present interpretation of mechanisms underlying the interaction between ivabradine and HCN channels relies essentially only on electrophysiological data, we used a combination of alanine-scanning mutagenesis and 3D modeling and docking in the attempt to resolve the molecular basis of channel block. The first important information yielded by the modeling approach is the indication that when the channel is in the open state, the smallest diameter of the internal mouth of the cavity is about 11 Å (calculated between F509 side-chains of opposite subunits), which is enough to allow ivabradine to access the cavity in a dynamic structural context. On the contrary, when the channel is in the closed state, ivabradine cannot enter, since the smallest diameter of the internal cavity is about 4-5 Å (calculated either between the T514 or between the I510 side-chains of opposite subunits). These data thus provide an explanation for the known "open channel" block property of ivabradine [10] and for the "trapping" of blocking molecules within the channel in the closed state, a property described also for the block of mHCN1 by ZD7288 [2][3][4][5][6]. Electrophysiological analysis of mutant block by ivabradine indicates that the residues forming the floor of the closed cavity (Y506 and I510, Fig. 1A) are major determinants of ivabradine block measured during activation/deactivation protocols (Figures 2 and 3). These data indeed show that block is similarly reduced in Y506A and I510A mutant channels (Fig. 2; IC50 = 57.7 and 47.7 µM, respectively), and is further strongly reduced in the double mutant Y506A-I510A (Fig. 3, IC50 = 2213.0 µM). The structural position occupied by residues Y506 and I510 in hHCN4 appears to be critical for drug binding also in hERG channels, since the corresponding amino acids (Y652 and F656) have been identified as critical elements for the interaction with several structurally unrelated drugs [29][30][31]. A molecular rationale to interpret the block data of HCN4 channels can be obtained from 3D modeling. As shown in Fig. 5, the "best" pose of drug molecules within the channel differs substantially in closed and open channels. In the closed state, the position occupied by ivabradine is proximal to the internal pore end and is stabilized by a floor formed by the side-chains of residues Y506 and supported by I510 residues, representing the inner boundary of the channel cavity. In the open channel state the floor of the cavity is partly disassembled, and as a consequence the cavity is wider and ivabradine finds a stable binding site in a position farther away from the pore; given the larger opening of the inner cavity (about 11 Å), ivabradine molecules are not trapped anymore and move easily across it. As discussed above, our simulations highlight a significant reshaping of the docking cavity when channels change from the open to the closed state. As well as acquiring a more pore-proximal position relative to the open state, in the closed state drug molecules modify their orientation such that the quaternary N atom moves from a more peripheral to a more central position relative to the pore axis. In closed channels, the position acquired by the N atom is sufficiently close to the lowermost of the binding sites of permeating ions (1.5 Å) to antagonize their binding. This provides a satisfactory explanation of the property of "current dependence" of block previously described [10].
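The pore diameters quoted above (about 11 Å between F509 side-chains of opposite subunits in the open model, about 4-5 Å between T514 or I510 side-chains in the closed model) are simple inter-atomic distances measured in the tetramer model. A hedged sketch of such a measurement with Biopython is given below; the PDB file name, chain IDs and the choice of a single side-chain atom per residue are assumptions for illustration.

```python
# Sketch: measure the distance between equivalent side-chain atoms of
# opposite subunits in a tetrameric homology model. The file name
# 'hcn4_open_model.pdb' and chain IDs 'A'/'C' are hypothetical.
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
structure = parser.get_structure("hcn4", "hcn4_open_model.pdb")
model = structure[0]

def cross_pore_distance(res_id, atom_name, chain_1="A", chain_2="C"):
    """Distance (Å) between the same atom of a residue in opposite chains."""
    a1 = model[chain_1][res_id][atom_name]
    a2 = model[chain_2][res_id][atom_name]
    return a1 - a2  # Bio.PDB overloads '-' on atoms to return the distance

# F509 ring carbon used as a crude proxy for the side-chain position.
print(f"F509-F509 across the pore: {cross_pore_distance(509, 'CZ'):.1f} Å")
```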
3D docking model analysis indicates that the values of the ΔGb of ivabradine binding to the two mutant Y506A and I510A closed channels are similar (−8.02 kcal/mol and −8.29 kcal/mol, respectively) and are both smaller than that for the WT closed channels (−10.2 kcal/mol). This agrees with the experimental evidence presented in Fig. 2 that block is similarly reduced for Y506A and I510A relative to WT, and further stresses the evidence that modeling and electrophysiological data converge in the interpretation of block features. Docking analysis of ivabradine in the WT closed channel (Fig. 5A) shows that the drug interacts directly with Y506 but not with I510. However, I510 residues lie beneath the Y506 side chains and their interactions maintain the orientation of both residues toward the centre of the cavity. In the I510A mutant channel, these interactions are lost and, as a consequence, the Y506 side-chains can now change their orientation in a way which decreases the stabilization of the bound ligand (Fig. 6B). Results obtained with the double mutant Y506A-I510A, where according to 3D modeling the integrity of the floor of the cavity is disrupted in the closed configuration (Fig. 1), concur to indicate that residues Y506 and I510 are major determinants of block efficiency (Figs. 3, 6C). Consistent with the block data, the ΔGb is smaller for the double mutant (−6.70 kcal/mol) than for either of the two single mutant channels. As for the open channel, the experiments with double and triple mutants (Fig. 3) reveal that the role of the floor of the cavity, formed mainly by residues Y506 and F509, is less critical. While block of the single F509A mutant is reduced, indicating a role for this residue in block determination, the data in Fig. 3 and Fig. 4 rule out a direct contribution. According to the analysis of ivabradine docking to F509A channels (Fig. 6D,E), this can be explained by assuming that the F509 side-chain exerts a spatial constraint on the orientation of Y506, and its removal causes a rotation of the Y506 side-chain, an effect able to destabilize ivabradine binding. This is indeed verified when comparing the orientation of the Y506 side-chains in the F509A mutant channel with that in WT channels (Fig. S2).

Comparison with Previous Studies

Our hHCN4 model can be compared with previous studies of other HCN channels based on homology modeling, scanning accessibility and/or drug interaction experiments. Investigation of spHCN and mHCN1 channel block by ZD7288 has shown that residues homologous to hHCN4 Y506, M508 and I510 are accessible to drug block and therefore face the internal cavity of channels [2]. These results agree with our finding that Y506 and I510 face the pore, while in our model M508 does not line the internal cavity but rather points towards the S5-S6 interface.

(Figure 6 legend, fragment: "... form. Side-chains of residues relevant to ivabradine binding are shown as ball-and-stick in all panels. In all mutant models the best pose of the docked ivabradine is shown in magenta, while the S5-P-S6 regions of the four hHCN4 subunits are shown as ribbon in grey, yellow, orange and pale blue, respectively. For clarity, residues have been labelled only in one hHCN4 subunit, with mutated residues indicated in bold characters." doi:10.1371/journal.pone.0053132.g006)

It is interesting to note, however, that Chan et al. [25] recently found that mHCN1 M377, the homologue of hHCN4 M508, does not face the cytoplasmic side of the pore.
The same authors [25] also reported that mHCN1 residues homologous to hHCN4 C478, C505, F509 and I510 face the internal cavity, in agreement with our data. mHCN2 residues homologous to hHCN4 A503 and I510 also face the internal cavity as in our model, according to Cheng et al. [24]. Our hHCN4 model is also consistent with cysteine-scanning mutagenesis experiments showing that spHCN C428 (homologous to hHCN4 C478) faces the cytoplasmic side of the channel pore, while spHCN K433 (homologous to hHCN4 R483) faces the extracellular vestibule [32]. Further investigation by cysteine-scanning mutagenesis [3,33] has shown that spHCN residue Q468, corresponding to hHCN4 Q518, is accessible in the closed channel state, while T464, corresponding to T514, is not. These results suggest that the narrowest region of the crossing bundle lies in the vicinity of the end of S6. Giorgetti et al. [4] identify spHCN Q468 as the position of narrowest opening of the spHCN crossing bundle in the closed channel state (their Fig. 8). Although we have not performed accessibility experiments, our homology 3D model predicts that the narrowest region of the channel (in the closed state) is located close to T514 (thus slightly upstream of that reported for spHCN channels), with Q518 protecting T514 from the solvent at the cytosolic side. In our model Q518 is therefore accessible even in the closed channel state, while T514 is not. Giorgetti et al. [4] used the crystal structure of the MthK channel as a template to model the open spHCN channel. Since this model failed to explain experimental results with cysteine-scanning and Cd2+ block experiments [3,33], an additional constraint was applied by the authors to reorient the C-terminus of the S6 helix, which reduced the bending of helix S6 by about 18° relative to MthK. More recently the open form of the KcsA channel has been published [13], and we took advantage of this knowledge to model the open form of hHCN4. In our model we did not impose any additional rotational constraint, and as a result the orientation of the C-terminal part of the helix S6 in hHCN4 is about 10° less kinked (at G502) than the MthK S6, a bending intermediate between those of the spHCN model and the MthK crystal structure.

Conclusions

Our data indicate that the ivabradine binding site is located within the inner cavity of HCN4 channels, where the bound ivabradine is stabilized by several van der Waals and hydrophobic interactions. Although this happens both in the closed and in the open channel forms (Fig. 5), the drug binding mode differs substantially in the two channel states. In particular, the position acquired by the ivabradine N atom in the closed channel is sufficiently close to the lowermost permeating ion binding site to antagonize ion binding, in agreement with current-dependent block. According to our results, the major determinant of ivabradine binding to the channel is the structural integrity of the floor of the cavity in the closed channel, represented by the double crown built by residues Y506 and I510 in the tetrameric assembly. Our data indicate that the affinity of ivabradine binding to the blocking site is higher in the closed than in the open state of the channel. This feature does not contrast with the property of "open channel block", since it simply reflects the need of channels to be open for drug molecules to reach their binding site.
In fact, this functional property corresponds to our model predictions whereby the drug cannot reach, nor leave, the docking site when the channel is in the closed state. Thus, the block mechanism can be summarized as follows: 1) ivabradine needs an open channel to access its binding site; 2) open channels, depolarization, and outward current flow favour access to the blocking site by means of a "current-dependent" mechanism; 3) when the channel is closed, the drug molecule is "trapped" in the cavity and cannot be displaced by either inward or outward flow; trapping depends strongly upon the side chains of the Y506 and I510 residues. Trapping of blocking molecules has been described already for the binding of ZD7288 to mHCN2 channels [24]. Although the mechanisms of block of ZD7288 on mHCN2 and of ivabradine on hHCN4 are clearly not identical, some specific similarities exist. For example, similar effects are observed when the structurally homologous residues I510 (hHCN4) and I432 (mHCN2) are replaced by alanine. On the other hand, important differences also exist. For example, Y506 (hHCN4) is involved in determining the steady-state hHCN4 block by ivabradine, but the corresponding mHCN2 Y428 has no effect on steady-state mHCN2 block by ZD7288. Overall these data suggest that while the blocking sites of both drugs reside in the channel cavity, the detailed molecular interactions with surrounding residues differ substantially in hHCN4 and mHCN2. This agrees with the notion that ZD7288 block is not current-dependent [10].

Supporting Information

Figure S1: Membrane expression of L477A mutant channels. Video-confocal images of HEK293 cells transfected with hHCN4 WT (top panels) and L477A mutant cDNA (bottom panels), and immunolabelled with anti-hHCN4 antibodies (red). In both cases a strong membrane-associated fluorescence is detected, indicating that the protein localizes to the membrane. The lack of current expression in the L477A mutant channel (see Table S1) may therefore indicate that substitution of L477 functionally impairs the ability of the channel to carry the current. Each image represents the scanning of a single video-confocal section. Nuclei labeled with DAPI. (TIF)

Figure S2: The Y506 side chain in F509A is rotated relative to wild-type channels. View of the interior of the hHCN4 WT (gray) and F509A mutant channels (orange) in the closed form. Side-chains of Y506 and F509 in the WT, and of Y506 and A509 in the mutant F509A channels, are shown as ball-and-stick. In magenta is the best pose of docked ivabradine for the F509A mutant. (TIF)

Materials and Methods S1: Methods for immunolabeling. (DOC)

Table S1: Biophysical properties of hHCN4 WT and mutant channels expressed in HEK293 cells. V1/2, voltage of half-maximal activation; s, slope factor of activation curve; tact, activation time constant measured at −140 mV (single exponential fit); tdeact, deactivation time constant measured at +5 mV (single exponential fit); *P < 0.05 vs WT channels.
2016-05-04T20:20:58.661Z
2013-01-04T00:00:00.000
{ "year": 2013, "sha1": "39f10e16ff2b98d0096aef4757b166cb49be3886", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0053132&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "39f10e16ff2b98d0096aef4757b166cb49be3886", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
141510737
pes2o/s2orc
v3-fos-license
Current Role of Lipoprotein Apheresis

Purpose of Review: Lipoprotein apheresis is a very efficient but time-consuming and expensive method of lowering levels of low-density lipoprotein cholesterol, lipoprotein(a) and other apoB-containing lipoproteins, including triglyceride-rich lipoproteins. First introduced almost 45 years ago, it has long been a therapy of "last resort" for dyslipidaemias that cannot otherwise be managed. In recent years new, very potent lipid-lowering drugs have been developed and the purpose of this review is to define the role of lipoprotein apheresis in the current setting.

Recent Findings: Lipoprotein apheresis still plays an important role in managing patients with homozygous FH and some patients with other forms of hypercholesterolaemia and cardiovascular disease; in particular, patients not achieving treatment goals despite modern lipid-lowering drugs, either because these are not tolerated or because the response is insufficient. Recently, lipoprotein(a) has emerged as an important cardiovascular risk factor and lipoprotein apheresis has been used to decrease lipoprotein(a) concentrations in patients with marked elevations and cardiovascular disease. However, there is considerable heterogeneity concerning the recommendations by scientific bodies as to which patient groups should be treated with lipoprotein apheresis.

Summary: Lipoprotein apheresis remains an important tool for the management of patients with severe drug-resistant dyslipidaemias, especially those with homozygous FH.

Introduction

A detailed description of the history and development of the extracorporeal removal of plasma cholesterol was published in a previous issue of Current Atherosclerosis Reports [1]. This introduction provides a brief account of the evolution over the past 50 years of the various procedures now collectively termed lipoprotein apheresis. The initial stimulus to undertake a radical approach to lowering plasma cholesterol was the intractable nature and severity of the increase in low-density lipoprotein (LDL) cholesterol that characterised homozygous familial hypercholesterolaemia (FH) and resulted all too often in premature death from atherosclerotic cardiovascular disease. The only LDL-lowering drugs available in the 1960s, nicotinic acid and cholestyramine, were ineffective in this situation, so Myant [2] and De Gennes et al. [3] resorted to manual plasmapheresis. This lowered plasma cholesterol but was too slow and labour-intensive for prolonged use. However, in 1975, Thompson et al. [4••] overcame these drawbacks by using a continuous-flow blood cell separator to repetitively undertake unselective plasma exchange in 2 FH homozygotes. Subsequently, Stoffel et al. [5••] introduced selective removal of LDL by using a cell separator to perfuse plasma through an immunoadsorbent column. The latter procedure is still available [6] but has been largely superseded by methods involving perfusion of plasma or whole blood through affinity columns containing either dextran sulphate covalently linked to cellulose beads [7][8][9] or polyacrylate-coated polyacrylamide beads [10]. Like immunoapheresis, these bind the apolipoprotein B component of LDL and lipoprotein(a) (Lp(a)) and thus remove from the circulation these lipoproteins and their cargo of cholesterol. A radically different approach to lipoprotein apheresis involves the extracorporeal precipitation of LDL through the addition of heparin to plasma, the so-called HELP system [11].
Precipitation of LDL occurs without the addition of cations if the pH is lowered sufficiently, the precipitate being removed by filtration. Another method of removing lipoproteins from plasma is double filtration plasmapheresis (DFPP) [12,13]. In this procedure, plasma is separated from blood cells by a hollow membrane filter and then perfused through a second filter which selectively retains smaller plasma components like high-density lipoprotein (HDL) and albumin, but discards larger molecular weight components including LDL and Lp(a). Acute decreases in LDL-cholesterol after each procedure range from 60 to 80%, depending upon the volume of blood or plasma treated. Although it lowers LDL-cholesterol to a similar extent, DFPP removes more HDL cholesterol than other methods. Haemoperfusion systems are the easiest to use but, like dextran sulphate-based plasma adsorption methods, employ disposable columns and are therefore more expensive than immunoadsorption, which utilises re-usable columns. Dextran sulphate-based methods are probably the most popular and are remarkably safe [14]. A comparison in FH homozygotes of dextran sulphate adsorption and HELP apheresis in Canada showed that the former lowered LDL-cholesterol to a greater extent than the latter (70.5% vs 63%, P = 0.02), mainly because it enables a greater volume of plasma to be treated [15]. A recent international survey of the management of FH found that lipoprotein apheresis was available in approximately 60% of 63 countries worldwide [16], cost being a limiting factor. In Germany, where lipoprotein apheresis is reimbursed by the health care system, almost 1300 patients received this treatment at 68 centres between 2012 and 2015 [17]. The clinical indications, current guidelines and evidence base for the efficacy of lipoprotein apheresis in the treatment of patients with severe hyperlipidaemia are discussed in the subsequent sections of this review.

Current Indications and Recent Guidelines for Lipoprotein Apheresis

Although most lipid guidelines mention lipoprotein apheresis as a therapy of last resort, they differ significantly in defining which patients to treat and under what circumstances [18]. This reflects a lack of convincing outcome trials, as most of the evidence supporting the use of lipoprotein apheresis comes from retrospective analyses or extrapolation of intervention studies using lipid-lowering drugs. Since lipoprotein apheresis effectively decreases the plasma concentrations of LDL, lipoprotein(a) and triglyceride-rich lipoproteins, it can be hypothesised that lipoprotein apheresis could be used in a number of different clinical settings. Currently, lipoprotein apheresis is mainly used in two different clinical settings (Table 1): severe elevations of LDL-cholesterol and marked elevations of lipoprotein(a). With respect to elevated LDL-cholesterol (LDL-C), there is agreement that patients with homozygous FH inadequately responsive or refractory to lipid-lowering drugs qualify for such treatment [19][20][21][22][23][24][25]. Guidelines also agree that in homozygous FH, apheresis therapy should be started as early as possible, preferably in early childhood. The situation is less clear-cut for patients with heterozygous FH or other forms of hypercholesterolaemia (Table 1). For example, in the USA, apheresis is approved for severe LDL-hypercholesterolaemia which persists despite maximal drug therapy (LDL > 300 mg/dl (7.8 mmol/L) without concomitant cardiovascular disease or > 200 mg/dl (5.2 mmol/L) with concomitant cardiovascular disease) [20].
In Germany, apheresis for elevated LDL-C can be performed if, despite maximal possible drug therapy, LDL-C cannot be reduced sufficiently. No specific threshold is given because the overall risk profile of each patient needs to be considered [22]. In Japan, apheresis is indicated in heterozygous FH if total cholesterol remains above 250 mg/dl (6.5 mmol/L) despite maximal drug therapy [21]. Thus, generally speaking, apheresis can be considered in hypercholesterolaemia other than homozygous FH if atherosclerotic vascular disease is present and progressive and if LDL-C treatment goals, which vary from country to country, are not met despite maximal possible drug therapy (including proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors). Although lipoprotein(a) is a causal risk factor for atherosclerotic disease and although there are only limited means to treat elevated lipoprotein(a) (Lp(a)) levels, the role of lipoprotein apheresis in this context is not well defined. The National Lipid Association and HEART UK consider elevated Lp(a) as an additional risk factor that should be taken into account when deciding whether lipoprotein apheresis should be used to treat elevated LDL-C [20]. Elevated Lp(a) per se is therefore not an indication. In contrast, in Germany, elevated Lp(a) levels are considered to be an indication for regular apheresis if certain prerequisites are fulfilled, namely if Lp(a) is > 60 mg/dl in patients with progressive cardiovascular disease despite optimal management of all other risk factors including LDL-C [17]. Some of the other guidelines do not mention the role of lipoprotein apheresis for treating patients with elevated Lp(a). Although lipoprotein apheresis also decreases the concentration of triglyceride-rich lipoproteins, none of the guidelines specify the circumstances under which hypertriglyceridaemia should be treated with lipoprotein apheresis.
Theoretical and Practical Considerations Governing the Optimum Frequency and Efficacy of Lipoprotein Apheresis Procedures It has long been accepted that the production rate of LDL, the lipoprotein that transports over 90% of plasma cholesterol in FH homozygotes, obeys zero-order kinetics, i.e. it remains constant irrespective of pool size, whereas catabolism of LDL is governed by first-order kinetics, i.e. the fractional catabolic rate (FCR) is constant irrespective of pool size [26]. There is a steep fall in plasma total and LDL-cholesterol immediately after apheresis and then a curvilinear rebound back to the baseline level, the speed of which is largely determined by the FCR of the lipoprotein particle in question. For LDL, this depends upon inherent LDL receptor activity plus the influence of any lipid-lowering drugs on the latter, such as statins. The magnitude of the acute decrease in lipoproteins after apheresis depends upon the volume of plasma treated, treatment of 1.2 plasma volumes (approximately 4 l) resulting in a reduction of 70% below the baseline value. As shown in Fig. 1, the subsequent rebound in plasma cholesterol is fastest in normal subjects and slowest in FH homozygotes, with heterozygotes intermediate. The actual value of total or LDL-cholesterol ($C_t$) at any given time ($t$) after apheresis can be calculated from the formula $C_t = C_0 - (C_0 - C_{\min})e^{-kt}$, where $C_0$ is the baseline value, $C_{\min}$ is the post-apheresis value, and $k$ is the FCR [27]. For example, in the homozygote in Fig. 1,
if the baseline level of cholesterol of 15 mmol/l is acutely reduced by 70% and the FCR is 0.1, the post-apheresis levels at 1 and 2 weeks will be respectively 35% and 17% below the baseline level. Similarly in the heterozygote, if the baseline level of 7 mmol/l is reduced by 70% and the FCR is 0.2, then post-apheresis levels at 1 and 2 weeks will be 17% and 4% below the baseline level. Hence, apheresis every 2 weeks has a modest cholesterol-lowering effect in homozygotes but virtually none in heterozygotes, in whom weekly apheresis is necessary for any significant effect. This is exemplified by the Familial Hypercholesterolaemia Regression Study, where bi-weekly apheresis (because of operational constraints) plus simvastatin lowered LDL-cholesterol only marginally more in FH heterozygotes than did a bile acid sequestrant plus simvastatin [28]. A similar approach can be used to describe Lp(a) rebound following apheresis. A direct comparison indicates that lipoprotein(a) rebounds at a slower rate than LDL but with a similar monoexponential function [29]. Therefore, if apheresis is performed weekly, Lp(a) concentration will not rebound to its original (pre-first apheresis) value and a lower level will therefore be achieved. Plasma levels of LDL and Lp(a) decrease further if apheresis is repeated on a regular basis but stabilise when a new non-steady state is reached. In terms of clinical relevance, the best index of efficacy is probably the interval mean between successive procedures, which can be calculated by integrating the area under the rebound curve or by using a modified version of the formula of Kroon et al. [30,31]. Patients' compliance is another important factor in determining the long-term benefit of lipoprotein apheresis. A recent survey in France showed an overall compliance rate of nearly 90%, non-compliance being evident mainly in patients undergoing weekly apheresis [32].
Recent Evidence of Therapeutic Benefit of Lipoprotein Apheresis via Lowering of: (a) Low-Density Lipoprotein Data from the German Lipoprotein Apheresis Registry (GLAR), based on over 15,000 apheresis procedures, showed a median acute reduction in LDL-cholesterol of 69% and of Lp(a) of 70% in hyperlipidaemic patients with cardiovascular disease [17]. These reductions were associated with a 97% decrease in the incidence of major adverse coronary events (MACE) during the first year of lipoprotein apheresis compared with the 2 years preceding the start of this treatment. These data were obtained prior to the introduction of PCSK9 inhibitors in Germany. In the ODYSSEY ESCAPE trial, treatment of FH heterozygotes undergoing lipoprotein apheresis with the PCSK9 inhibitor alirocumab resulted in an additional 54% reduction in LDL-cholesterol. Based on the trial criterion of reducing LDL-cholesterol by ≥ 30% below the baseline value on apheresis, 63% of patients on alirocumab were able to discontinue apheresis altogether and over 90% to halve its frequency [33••]. In contrast, in FH homozygotes on apheresis in the TAUSSIG study, reductions in LDL-cholesterol after the addition of the PCSK9 inhibitor evolocumab averaged 23% [34•]. This was much less than that observed in FH heterozygotes on evolocumab [35] and the decrease in Lp(a) was also less, only 12%. One of the factors influencing the LDL-lowering response of homozygotes to evolocumab was LDL receptor status. The 10% who were receptor negative showed only a 6% decrease in LDL whereas those who were receptor defective had a 24% decrease.
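Returning to the rebound kinetics discussed above, the interval figures are easy to reproduce. The following minimal sketch (ours, not from the paper) simply evaluates the monoexponential rebound formula quoted earlier, with the FCR k expressed per day as in the worked example, and recovers the 35%/17% and 17%/4% figures:

```python
import math

def chol_after_apheresis(C0, acute_drop, k, t_days):
    """Monoexponential rebound: C_t = C0 - (C0 - Cmin) * exp(-k * t)."""
    Cmin = C0 * (1.0 - acute_drop)
    return C0 - (C0 - Cmin) * math.exp(-k * t_days)

for label, C0, k in (("homozygote", 15.0, 0.1), ("heterozygote", 7.0, 0.2)):
    for weeks in (1, 2):
        Ct = chol_after_apheresis(C0, 0.70, k, 7 * weeks)
        print(f"{label}: {100 * (C0 - Ct) / C0:.0f}% below baseline at week {weeks}")
# Output: 35% and 17% (homozygote), 17% and 4% (heterozygote), as in the text.
```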
Although PCSK9 inhibitors clearly have considerable potential as an alternative to apheresis in the treatment of patients with statin-refractory heterozygous FH, their usefulness in homozygous FH and in patients with raised Lp(a) levels is less obvious and in most instances they will complement rather than replace lipoprotein apheresis. Another adjunctive drug, whose action is independent of LDL receptor status, is the microsomal triglyceride transfer protein (MTP) inhibitor lomitapide, which reduces LDL-cholesterol by 50% in FH homozygotes [36]. Its efficacy is similar irrespective of whether such patients are or are not on lipoprotein apheresis and in about 50% of instances it reduced their LDL-cholesterol to < 2.5 mmol/l (96 mg/dl) [37]. However, its long-term safety remains under scrutiny. The apoB synthesis inhibitor mipomersen was used in another study in patients undergoing regular apheresis but did not result in a decreased apheresis frequency and was associated with a high incidence of side effects [38]. A frequent cause of morbidity and mortality in homozygous FH is atherosclerosis of the aortic root. The location and severity of atheroma at this site in FH homozygotes, but not in heterozygotes, is identical to that seen in cholesterol-fed rabbits and is attributable to the severity of hypercholesterolaemia in both these situations [39]. A 50-year long survey of UK patients found that the occurrence of aortic stenosis was lower in patients who started treatment during the 1990s as opposed to those treated in the pre-statin era (33% vs 77%, P = 0.02), reflecting better control of serum cholesterol by apheresis and statins [40]. A French study of children with homozygous FH showed that the frequency of aortic stenosis and the need for surgery were associated with the age at which lipoprotein apheresis was initiated [41]. Those with aortic root atheroma started apheresis at age 10 whereas those without atheroma had started it earlier, at age 5. Recent evidence that effective lipid-lowering therapy increases life expectancy came from a retrospective survey of 133 homozygotes in South Africa and the UK who were divided into quartiles according to their on-treatment levels of serum cholesterol from 1990 to 2014 [42•]. Patients in quartile 4, with an on-treatment serum cholesterol > 15 mmol/l (584 mg/dl), had a hazard ratio of 11.5 for total mortality compared with those in quartile 1, with an on-treatment cholesterol of < 8 mmol/l (313 mg/dl). Those in quartiles 2 and 3 combined, with an on-treatment cholesterol of 8-15 mmol/l (313-584 mg/dl), had a hazard ratio of 3.6 compared with quartile 1. These differences were statistically significant (P < 0.001) and remained so after adjustments for confounding factors (P = 0.04). Significant differences between quartiles were also evident for cardiovascular deaths and MACE. It is noteworthy that 50% of UK patients were on apheresis versus 13% of South African patients and that 60% of the former but only 19% of the latter were in quartile 1, reflecting the fact that reductions in total cholesterol in patients on apheresis averaged 57% in the UK versus 32% in South Africa (P = 0.01). This study provides strong evidence that the extent of reduction of serum cholesterol achieved by a combination of therapeutic measures, including lipoprotein apheresis, statins, ezetimibe and evolocumab, is a major determinant of survival in homozygous FH.
As stated in an accompanying editorial, at long last there is "light at the end of the tunnel" for homozygous FH [43].
Recent Evidence of Therapeutic Benefit of Lipoprotein Apheresis via Lowering of: (b) Lipoprotein(a) An elevated Lp(a) level is an independent risk factor for atherosclerosis [44]. Thus, it can be expected that lowering Lp(a) levels translates into clinical benefit. It is, however, unclear how much Lp(a) must be decreased to achieve significant risk reduction. In a recent study based on genetic data, it was hypothesised that a decrease in Lp(a) concentration of > 100 mg/dl is required to achieve a benefit equivalent to that of 1 mmol/l (39 mg/dl) of LDL-cholesterol lowering [45]. On the other hand, data from the ODYSSEY OUTCOMES trial indicate that a much smaller reduction in Lp(a) was beneficial (a 1 mg/dl reduction resulted in a 0.6% relative risk reduction; thus, a reduction in lipoprotein(a) of about 35 mg/dl would lead to the same risk reduction as 1 mmol/L (39 mg/dl) of LDL-C reduction) [data only published in abstract form]. The topic is further complicated by the fact that so far there are no drugs available that solely decrease Lp(a) concentrations. Niacin decreases Lp(a) but also has multiple other effects on lipoproteins and was not shown to have clinical benefit [46]. PCSK9 inhibitors can also decrease Lp(a) concentrations but also (and primarily) decrease LDL-cholesterol, which makes it difficult to decide how much of the clinical benefit relates to LDL-cholesterol reduction and how much to lipoprotein(a) reduction. Similarly, most lipoprotein apheresis methods decrease both LDL and Lp(a) concentrations, again making it difficult to dissect out the effect of lipoprotein(a) reduction. In addition, there are no adequate clinical endpoint trials evaluating the effect of apheresis in patients with elevated Lp(a). As discussed earlier, an older trial evaluated whether, in patients with CAD and heterozygous FH (n = 39), bi-weekly apheresis in combination with simvastatin (40 mg/day) is superior to simvastatin (40 mg/day) in combination with colestipol (20 g/day) [28]. After 2.1 years, there was no significant difference between the two groups and the authors therefore concluded that "decreasing Lp(a) seems to be unnecessary if LDL-C is reduced to 3.4 mmol/l (132 mg/dl) or less". However, patients had a low Lp(a) baseline level and only a modest Lp(a) reduction with apheresis (mean reduction 10 mg/dl). In a subsequent angiographic trial, it was evaluated whether a specific Lp(a) apheresis (Lipopac apheresis) plus statin reduces CHD progression compared to statin alone in 30 patients with CHD and elevated lipoprotein(a) (> 50 mg/dl) [47]. After 18 months, apheresis-treated patients showed significantly more regression and less progression. Again, this trial was limited by a small number of subjects and the lack of reporting of clinical events. Recently, an analysis of the German Lipoprotein Apheresis Registry for the period 2012-2015 was reported and showed acute reductions of LDL-cholesterol and Lp(a) of 68.6% and 70.4% respectively [17]. The data showed a dramatic reduction (− 97%) of cardiovascular events when the period before initiation of apheresis was compared to the period of regular apheresis. This very impressive reduction must be interpreted with caution as the setting is not randomised or controlled. Another publication showed a significant reduction of interventions in patients with peripheral artery disease after initiation of apheresis (observational data) [48].
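As a sanity check, the equivalence quoted above from the ODYSSEY OUTCOMES abstract is simple arithmetic (treating the per-mg/dl relative risk reduction as additive over this range is, of course, a simplification on our part):

$$35\ \mathrm{mg/dl} \times 0.6\%/(\mathrm{mg/dl}) = 21\% \approx \text{the relative risk reduction attributed to } 1\ \mathrm{mmol/l}\ (39\ \mathrm{mg/dl})\ \text{of LDL-C lowering.}$$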
This recent analysis confirms previous German evaluations and also a study from Italy evaluating cardiovascular events before initiation of apheresis and during regular apheresis therapy [49, 50•, 51, 52]. In two of the German studies, only subjects with isolated Lp(a) elevation were included (with LDL < 2.5 mmol/l (97 mg/dl) on statin therapy) [51,52], while in the third study, patients with concomitantly elevated LDL-cholesterol were also included [50•]. The event rate decreased dramatically in all 4 studies after initiation of regular apheresis, but these observations are severely limited by the lack of a control group. Progression of disease and recurrent events are the main reasons for starting a patient on apheresis. Thus, it is not surprising to observe a very high event rate in the time period before regular apheresis. As outlined elsewhere, it is impossible to confirm the true effect of apheresis without an adequate control group [53,54].
Recent Evidence of Therapeutic Benefit of Lipoprotein Apheresis via Lowering of: (c) Triglyceride-Rich Lipoproteins Severe hypertriglyceridaemia (> 10 mmol/l; ca 900 mg/dl) due to increased levels of very low-density lipoprotein (VLDL), chylomicrons and remnant particles is a recognised cause of acute pancreatitis. In these circumstances, plasma exchange with a centrifugal cell separator enables triglyceride levels to be drastically reduced with a rapid resolution of abdominal pain [55]. Hitherto there has been no evidence that this approach reduces morbidity and mortality [56] but recently, Chang et al. [57] showed that in patients with extreme hypertriglyceridaemia (> 56 mmol/l; ca 4900 mg/dl) and acute pancreatitis, treatment with DFPP halved the duration of hospitalisation compared with patients receiving conventional therapy. However, it remains uncertain whether apheresis can reduce mortality in this situation [58].
Future Prospects for Lipoprotein Apheresis in the Light of Recent Advances in Lipid-Lowering Drugs Lipoprotein apheresis is not only a modality that has enabled patients with severe hypercholesterolaemia and elevated lipoprotein(a) to be treated but it has also been used as a tool for the better understanding of the regulatory processes involved in lipoprotein metabolism and has thereby advanced knowledge [29,[59][60][61][62]. In recent years, new drugs have been brought to the market that effectively treat many patients with severe hypercholesterolaemia without resorting to apheresis. The availability of PCSK9 inhibitors decreases the need for apheresis dramatically, as most patients with heterozygous FH and other forms of hypercholesterolaemia respond very well to this therapy [33••]. From the "LDL perspective", only patients with homozygous FH and a limited number of patients with severe forms of heterozygous FH, or patients intolerant to any form of lipid-lowering drugs, remain potential candidates. With further drugs such as ANGPTL3 inhibitors and bempedoic acid being developed, this group may decrease further [63]. Similarly, potent drugs are being developed for decreasing Lp(a). Of particular interest is an antisense oligonucleotide that can decrease Lp(a) by more than 70%, which is much greater than the observed interval mean reduction during regular apheresis [64•].
Assuming safety, it can be anticipated that these drugs will have a similar effect on apheresis for elevated Lp(a) as PCSK9 inhibitors had on apheresis for elevated LDL-cholesterol, and they will eventually further decrease the number of patients requiring apheresis. However, we should keep one thing in mind: patients treated by regular apheresis have the advantage of being seen by the same medical team on a very regular (weekly or biweekly) basis. This tight control and guidance improves compliance (generally speaking) and allows medical issues to be discussed regularly in a familiar setting. Although this effect is hard to quantify, it would be surprising if it did not also affect the cardiovascular event rate. Obviously, drug therapy gives the patient more "freedom", but maybe at the cost of less strict medical surveillance.
Conclusions Even after recent dramatic improvements in drugs affecting lipid metabolism, lipoprotein apheresis still has its role in treating patients with certain dyslipidaemias. While most patients with heterozygous FH or other forms of elevated LDL-cholesterol can now be treated with drugs, apheresis remains a therapy of last resort in those not responding or intolerant to drugs and is still the gold standard for patients with homozygous FH. It is not only very efficient in decreasing LDL-cholesterol but also very safe and, unlike lomitapide, it can be used in children. In addition to its role in treating severe forms of LDL-hypercholesterolaemia, it is also used in patients with severe elevations of Lp(a) and atherosclerotic disease, although its role in this situation is less well defined. While the number of patients requiring apheresis will probably decrease as new drugs are developed, it will remain a therapy to be kept in reserve for certain types of patient.
Compliance with Ethical Standards Conflict of Interest Gilbert Thompson declares no conflict of interest. Dr. Parhofer reports personal fees from Aegerion, Akcea, Amgen, Boehringer-Ingelheim, and MSD. Dr. Parhofer also reports grants from Novartis, grants and personal fees from Regeneron, grants and personal fees from Sanofi, all outside of the selected work. Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
2019-05-02T13:03:04.266Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "d23763dd6418ff7618f2208cad17a8d2c2e4ea72", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11883-019-0787-5.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d23763dd6418ff7618f2208cad17a8d2c2e4ea72", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
149962719
pes2o/s2orc
v3-fos-license
The Residual Potential of Bottom Water Reservoir Based upon Genetic Algorithm for the Relative Permeability Inversion At present, among offshore oilfields, the X oilfield has successfully developed bottom water reservoirs with horizontal wells. The single-sand-body horizontal well development mode caused the water cut to rise rapidly and made bottom water coning irresistible. The common empirical formulas for recoverable reserves, obtained through statistical analysis, are not applicable to this bottom water reservoir. Under local scour multiples as high as tens of thousands, the underground seepage law has changed. In order to improve the understanding of the remaining potential of the bottom water reservoir in the ultra-high water cut stage, this research proposes a flow tube simulation of the bottom water ridge. In addition, combining the quantitative characterization of the water ridge form with the theory of vector permeability, a theoretical model is established. Finally, the relative permeability curve is calculated from the production data of the ultra-high water cut stage of the bottom water reservoir by coupling the model with a genetic algorithm. Based on the change of the water ridge and oil saturation, the mechanism of the endpoint change of the relative permeability curve is expounded, and the effective production radius of water drive in the bottom water reservoir is put forward, which provides a basis for understanding the potential of the oilfield and tapping it in the future.
Introduction The bottom water reservoir of the X oilfield has the characteristics of high crude oil viscosity, low oil column height and sufficient bottom water energy. The development mode of single sand bodies with horizontal wells was adopted, which caused the water cut of horizontal production wells to rise rapidly and made bottom water coning irresistible. When an oil well enters the ultra-high water cut stage, the swept volume tends to be stable, and the main way to improve the oil recovery rate is to improve the oil displacement efficiency. At present, the X oilfield group is in the ultra-high water cut stage and 68% of production wells have a water cut of more than 90%. The dynamic characteristics of the production wells show that the water cut of horizontal wells in the bottom water reservoir rises rapidly, the recoverable reserves are mainly produced after the high water cut period, and the cumulative oil production in the ultra-high water cut stage accounts for 60%-70% of the recoverable reserves. Therefore, the development characteristics of the bottom water reservoir show that the ultra-high water cut stage is an important development stage for this reservoir type. The current problem is that the common empirical formulas for recovery used in oilfields are mainly obtained through statistical analysis, and all calculation parameters are applicable only within a certain range (Permadi & Jayadi, 2010). However, the crude oil viscosity, permeability and well pattern density of the X oilfield fall outside the applicable range of these empirical formulas, so there is no reliable conclusion on the recovery rate (Kuchuk, 1988). In the actual development process of the oilfield, the local scour multiple index is as high as tens of thousands (Gao, Jiang, Wang, et al., 2016; Zheng, 1993).
With the increase of the pore volume injection multiple, the oil displacement efficiency in the swept area can be further improved, because there are large differences between the residual oil saturation acquired in laboratory tests and that under actual conditions. Under high scour conditions, the residual oil saturation endpoint must shift to the right, so the relative permeability curve needs to be further modified, and the final understanding of water drive efficiency requires further theoretical study.
Theoretical Model Establishment and Analysis of Bottom Water Reservoir Quantitative Characterization of the Bottom Water Ridge The flow of horizontal wells in a bottom water reservoir is shown in Figure 1. The hypotheses are: 1) oil-water two-phase fluid seepage in the reservoir; 2) uniform reservoir thickness; 3) reservoir fluid percolation conforms to Darcy's law; 4) reservoir rocks are anisotropic and heterogeneous; 5) capillary pressure is neglected; 6) gravity is neglected (Fan, 1993; Zheng, Xu, & Chen, 2013). Starting from the basic formula of the balance principle of statics, the relationship between the height of the water ridge and the radius of the water ridge can be obtained through the corresponding differentiation and integration. Through the resulting formula, the water ridge height at different radii can be calculated, as well as the distribution and quantity of remaining oil.
Vector Characteristics and Calculation Method of Directional Reservoir Permeability Research and production practice show that permeability has obvious directionality, so permeability is a vector quantity (Xin, 2011; Li, Zhou, Xiong, et al., 2015). The directional difference of permeability directly affects well arrangement, well spacing and the technical design of various development measures such as artificial fracturing. In view of this important role of vector permeability, and based on previous studies, this paper introduces the reasons for the vector characteristics of rock permeability and analyzes some points that are easily confused when understanding and applying them (Liu & Hu, 2011; Chen & Tao, 1997; Bing, 2012). As shown in Figure 2, OB is the azimuth reference line (the azimuth of OB is 0˚, generally taking the due east direction as the azimuth reference line); in the derivation, the x and y axes of the coordinate system are aligned with the directions of the extreme permeability, the extreme permeability in the x-axis direction being K_mx and that in the y direction K_my. For the derivation of the quantitative calculation model of vector permeability, let K_n denote the permeability in direction n and V_n the fluid seepage velocity in that direction; with μ the viscosity of the fluid, the flow through seepage section A then follows from Darcy's law. As shown in Figure 3, let A_x denote the effective seepage area of section A in the x direction (that is, the seepage area perpendicular to the x direction); α_n is the azimuth of direction n; θ represents the azimuth of the extreme permeability K_mx, that is, the azimuth of the x-axis.
Because the two sides of the angle β are respectively perpendicular to the two sides of the angle α_n − θ, β is equal to α_n − θ. Similarly, the effective seepage area of section A in the y direction (that is, the seepage area perpendicular to the y direction) is A_y. The component of the pressure gradient ∇P_n in the x direction is ∇P_nx; under its action, the seepage velocity of fluid passing through section A in the x direction again follows from Darcy's law. Combining these relations yields the quantitative calculation model of anisotropic permeability, where K_n is the permeability in direction n; K_mx is the extreme permeability in the x-axis direction; K_my is the extreme permeability in the y-axis direction; α_n is the azimuth of direction n; and θ is the azimuth of the extreme permeability K_mx.
Distribution Pattern and Grid Division of Streamlines in the Water Ridge Area of a Bottom Water Reservoir Since the gas-oil ratio is low in the X oilfield, the water ridge profile of a horizontal well can be simplified to an oil-water two-phase flow chart, as shown in Figure 3. After the production well enters the ultra-high water cut period, the swept area is stable. The flow tube model can then be used to simulate the oil-water two-phase flow in the swept area through the quantitative characterization of the water ridge shape (Sun, Zhou, Hu, et al., 2018; Yan, Li, Yin, et al., 2009; Chen, Bai, Lu, et al., 2015). The flow tube model assumes that the immiscible displacement process takes place within fixed flow tubes; the simulation mainly includes the solution of the frontal-advance equation, the solution of the total flow, and the solution of the relationship between total flow and time. According to two-dimensional seepage theory, the streamline distribution under steady conditions is obtained. Assuming the streamlines remain constant, the displacement path of the bottom water is obtained. In the process of displacement, the displacement space of the bottom water expands along the horizontal direction. In the flow tube model established by Higgins and Leighton, each flow tube is divided into N equal-volume grids. For each production step, the displacement process is simulated by advancing the saturation of the water drive front until water breakthrough occurs in the n-th grid. Finally, by combining the results of the different flow tubes at the same injection time value, the total dynamic state of all flow tubes connected to the same injection-production well pair is obtained. The solution steps of the Higgins-Leighton method are as follows: determine the streamline and flow tube distribution under single-phase flow; for each flow tube, assign a cumulative water injection multiple; calculate the saturation distribution and the average apparent viscosity distribution corresponding to each flow tube; calculate the total flow volume in the flow tube; calculate the corresponding time value; and summarize the results of each flow tube at the same injection time value to obtain the total flow dynamics. According to the streamline distribution of the bottom water area, grids of equal volume are the principle of division.
Realization of the Bottom Water Drive Model Calculation Program The model is calculated with MATLAB, a high-level programming language widely used in the fields of engineering calculation and numerical analysis. It has strong numerical calculation capabilities and is easy to learn and operate.
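The equal-volume grid division just described can be sketched in a few lines. This is our illustrative sketch, not the paper's MATLAB code; the function name and the linear-area example are hypothetical. Given the cross-sectional area of a flow tube along its arc length, cell boundaries are placed so that each cell holds the same volume:

```python
import numpy as np

def equal_volume_grid(s, area, n_cells):
    """Split a flow tube into n_cells grids of equal (pore) volume.
    s: arc-length samples along the tube; area: cross-sectional area at each s.
    Returns the n_cells + 1 arc-length positions of the cell boundaries."""
    dv = 0.5 * (area[1:] + area[:-1]) * np.diff(s)   # trapezoidal volume slices
    vol = np.concatenate([[0.0], np.cumsum(dv)])     # cumulative volume (monotone)
    targets = np.linspace(0.0, vol[-1], n_cells + 1)
    return np.interp(targets, vol, s)                # invert volume -> position

# Example: a tube whose cross-section widens linearly, split into 5 equal volumes.
s = np.linspace(0.0, 10.0, 201)
print(equal_volume_grid(s, 1.0 + 0.2 * s, 5))
```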
The software is divided into one core solver and four modules: a data input and processing module, an equal-volume meshing module, a water drive front saturation tracking module, and a post-processing module.
The Form of the Relative Permeability Curve Usually, the relative permeability curves are expressed in the normalized (Corey-type) form
$$k_{rw} = k_{rw}(S_{or})\left(\frac{S_w - S_{wi}}{1 - S_{wi} - S_{or}}\right)^{C_w}, \qquad k_{ro} = k_{ro}(S_{wi})\left(\frac{1 - S_w - S_{or}}{1 - S_{wi} - S_{or}}\right)^{C_o},$$
where S_w is the water saturation; S_wi the irreducible water saturation; S_or the residual oil saturation; k_ro(S_wi) the initial maximum oil relative permeability; k_rw(S_or) the maximum water relative permeability; and C_w, C_o constant exponents.
The Method of Genetic Algorithm (GA) The genetic algorithm is a computational model simulating the natural selection and genetic mechanisms of Darwinian evolution. It is a method that searches for the optimal solution by simulating natural evolution (Yin, Zhao, Dong, et al., 2012). Taking small bubbles of different colors and sizes as an example: in order to find bubbles of a specific color and size, it is necessary to select from the initial population of bubbles (survival and elimination) and then produce a new group of bubbles by genetic operations, until the optimal solution is found. In this paper, the genetic algorithm is coupled with the bottom water drive model. Different characteristic parameters of the relative permeability curve are generated as the initial population, and theoretical water cut versus recovery degree curves are calculated with the theoretical model. The optimal solution, with the minimum forecast error compared with the actual production data, is then determined, as shown in Figure 4.
Calculation Results of the Genetic Algorithm Coupled with the Theoretical Model of Bottom Aquifer Drive The residual oil saturation given by the optimized relative permeability curve is 0.05, as shown in Figure 5, which is lower than that of the original curve. This indicates that the oil displacement efficiency in the swept area can be greatly improved by the high pore-volume throughput of the ultra-high water cut stage of the bottom water reservoir. Further analysis shows that the optimized curve reflects a more strongly water-wet reservoir, as the iso-permeability point shifts to the right.
Distribution Characteristics of Microscopic Remaining Oil in the Bottom Water Reservoir Through the bottom water drive model, the distribution rule of the remaining oil under constant pressure difference or constant flow rate conditions can be obtained. From Figure 6, it can be seen that the bottom water swept area within the production range of the oil layer presents an "inverted V" shape; the water flow is fastest in the vertical direction below the production well, which is the dominant mainstream line, and its flow is far greater than in the edge area. The figure shows obvious differences in oil displacement efficiency within the swept area. The vertical direction below the production wells acts as a "dominant channel", with a scour multiple higher than 1000. It can also be seen that, under a high pore volume injection multiple, the oil displacement efficiency is highest. Through this model, the quantitative characterization of the spatial differences of the remaining oil can be realized.
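The GA coupling described above can be illustrated with a deliberately simplified sketch of ours: Corey-type curves (as written above) feed a toy forward model, fractional flow evaluated at the material-balance average water saturation, standing in for the authors' flow-tube solver, and a basic GA recovers the curve parameters from synthetic water cut versus recovery data. All names, bounds and viscosities here are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
MU_W, MU_O = 0.5, 30.0      # water/oil viscosities (mPa*s), heavy-oil assumption
SWI, SOR = 0.25, 0.05       # irreducible water / residual oil saturation (fixed)

def water_cut(params, recovery):
    """Toy forward model: Corey curves + fractional flow evaluated at the
    average water saturation implied (by material balance) by the recovery."""
    cw, co, krw_max = params
    sw = SWI + recovery * (1.0 - SWI)
    swn = np.clip((sw - SWI) / (1.0 - SWI - SOR), 0.0, 1.0)
    krw, kro = krw_max * swn**cw, (1.0 - swn)**co
    return (krw / MU_W) / (krw / MU_W + kro / MU_O + 1e-12)

def mse(params, rec, fw):
    return np.mean((water_cut(params, rec) - fw)**2)

def ga_fit(rec, fw, pop_size=60, gens=300):
    lo, hi = np.array([1.0, 1.0, 0.05]), np.array([6.0, 6.0, 1.0])
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(gens):
        err = np.array([mse(p, rec, fw) for p in pop])
        elite = pop[np.argmin(err)].copy()
        # tournament selection: the better of two random individuals survives
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((err[i] < err[j])[:, None], pop[i], pop[j])
        # blend crossover with a reversed copy of the parent pool, then mutation
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1.0 - alpha) * parents[::-1]
        children += rng.normal(0.0, 0.02, children.shape) * (hi - lo)
        pop = np.clip(children, lo, hi)
        pop[0] = elite                     # elitism: keep the best individual
    err = np.array([mse(p, rec, fw) for p in pop])
    return pop[np.argmin(err)]

# Synthetic "production data" from known parameters, then invert them with the GA.
true_params = np.array([2.5, 3.0, 0.4])
rec = np.linspace(0.05, 0.6, 25)
fw_obs = water_cut(true_params, rec) + rng.normal(0.0, 0.005, rec.size)
print("recovered [C_w, C_o, k_rw_max]:", np.round(ga_fit(rec, fw_obs), 2))
```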
Effective Production Radius of Water Drive in a Bottom Water Reservoir Based on the quasi-streamline method, a bottom water drive model for oil-water two-phase flow to a horizontal well in a homogeneous reservoir was established. It realizes the quantitative characterization of the bottom water swept area and of the remaining oil in that area, under constant pressure difference and constant flow rate conditions, as well as the analysis of differences in parameters such as the pore volume injection multiple and the displacement efficiency in the swept area. On this basis, the concept of the effective production radius of a bottom water drive reservoir is put forward for the first time, and the minimum inter-well interference distance is obtained. In addition, a theoretical template for the effective production radius of horizontal wells under different fluid properties and water-avoidance heights is given, as shown in Figure 7, which further enriches the development limits of bottom water reservoirs and provides guidance for the further development and adjustment of the oilfield.
Conclusion 1) Combining the genetic algorithm with the flow tube simulation of the bottom water, the relative permeability curve was calculated from the production data of the ultra-high water cut stage of the bottom water reservoir. 2) The residual oil saturation given by the optimized relative permeability curve is lower than that of the original curve, indicating that the oil displacement efficiency in the swept area can be greatly improved by the high pore-volume throughput of the ultra-high water cut stage of the bottom water reservoir. 3) Through the bottom water drive model, the distribution rule of the remaining oil can be obtained, and a theoretical template for the effective production radius of horizontal wells under different fluid properties and water-avoidance heights is given, in order to provide guidance for the further development and adjustment of the oilfield.
2019-05-12T14:10:29.568Z
2019-04-24T00:00:00.000
{ "year": 2019, "sha1": "ba762e3e0f111bf5068a225fe8837382894c60c7", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=92010", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9713a77b13c5afc03b57e3b82657bc1854606022", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
88521358
pes2o/s2orc
v3-fos-license
Complexified diffeomorphism groups, totally real submanifolds and Kähler-Einstein geometry Let (M, J) be an almost complex manifold. We show that the infinite-dimensional space T of totally real submanifolds in M carries a natural connection. This induces a canonical notion of geodesics in T and a corresponding definition of when a functional, defined on T, is convex. Geodesics in T can be expressed in terms of families of J-holomorphic curves in M; we prove a uniqueness result and study their existence. When M is Kähler we define a canonical functional on T; it is convex if M has non-positive Ricci curvature. Our construction is formally analogous to the notion of geodesics and the Mabuchi functional on the space of Kähler potentials, as studied by Donaldson, Fujiki and Semmes. Motivated by this analogy, we discuss possible applications of our theory to the study of minimal Lagrangians in negative Kähler-Einstein manifolds.
Introduction Let (M, J) be a 2n-dimensional manifold endowed with an almost complex structure. Given p ∈ M, we say an n-plane π in T_pM is totally real if J(π) ∩ π = {0}, i.e. if T_pM is the complexification of π. An n-dimensional submanifold L is totally real if T_pL is totally real in T_pM for all p ∈ L. This gives a decomposition $T_pM = T_pL \oplus J(T_pL)$. Although totally real submanifolds are a natural object in complex geometry, they cannot be studied using purely complex analytic tools. They are, in a sense, the opposite of complex submanifolds; in fact, they are "maximally noncomplex", where maximal also refers to their dimension. Furthermore, the defining condition is an open one so their "moduli space" T is infinite-dimensional. It might seem reasonable to conclude that this class of submanifolds is too weak to carry interesting geometry. In this paper we will prove the contrary by initiating a study of the global geometric features of the space T. Further results in this direction appear in the companion paper [11]; other applications appear in [12].
Geodesics on T. Our first main result, described in Section 2.1, is that T admits a natural connection, inducing a notion of geodesics. In simpler language, we discover that there exists a notion of canonical 1-parameter deformations of a totally real submanifold L, in any given direction. This is rather striking: there is no analogue of this fact known in other spaces of submanifolds. In some sense this observation is the "global version" of the definition of totally real submanifolds, which says the "normal" space T_pM/T_pL and tangent space T_pL are canonically isomorphic via J. In other words, the extrinsic and intrinsic geometry of L coincide; geodesics are, in a sense, the extrinsic analogue of the integral curves of tangent vector fields.
A convex functional. The geodesics induce a notion of convex functionals f : T → R: specifically, those which are convex in one variable when restricted to each geodesic. A second striking fact is provided by the following example. Consider M = C, so that T is the space of curves: in this situation we prove that the standard length functional is convex in our sense. Interestingly, this turns out to be a reformulation of a classical result due to Riesz concerning certain convexity properties of integrals of the form $r \mapsto \int_0^{2\pi} u(re^{i\theta})\, d\theta$, where u is a subharmonic function on an annulus. The length functional uses the metric on C, so in higher dimensions it is natural to focus on Kähler, more generally almost Hermitian, manifolds M and look for an analogous functional on T.
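Anticipating the one-dimensional picture developed in Section 3 below, the convexity of length can be checked numerically: for the normalized vector field a geodesic corresponds to a holomorphic map g on an annulus, with L_t the image of the circle of radius r = e^{-t} (this convention is the one reconstructed later in the text). The following sketch of ours, with an illustrative choice of g, verifies that the discrete second differences of arc length in t are non-negative:

```python
import numpy as np

def circle_image_length(g_prime, r, n=4096):
    """Arc length of g({|z| = r}) for holomorphic g: L(r) = integral of |g'(z)| |dz|."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(np.abs(g_prime(r * np.exp(1j * theta)))) * 2.0 * np.pi * r

g_prime = lambda z: 1.0 + 0.6 * z       # g(z) = z + 0.3 z^2, embedded near |z| = 1
t = np.linspace(-0.5, 0.5, 11)          # geodesic parameter, with r = exp(-t)
L = np.array([circle_image_length(g_prime, np.exp(-ti)) for ti in t])
print(np.all(np.diff(L, 2) >= -1e-9))   # discrete second differences: convexity in t
```

This is consistent with the Riesz-type statement quoted above, since |g'| is subharmonic and its circle averages are convex in log r.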
A first guess might be the standard Riemannian volume functional but, in our context, this is rather unnatural because it does not encode the totally real condition. In the literature [2] one finds a second "volume functional", tailored specifically to totally real submanifolds. To understand this alternative functional, the key observation is that there exists a second, equivalent, definition of the totally real condition: L is totally real if and only if the pullback operation for forms defines an isomorphism $K_{M|L} \simeq \Lambda^n(L; \mathbb{C})$. One can view this as another manifestation of the "extrinsic = intrinsic" property of totally real submanifolds. When L is oriented it turns out that $K_{M|L}$ admits a canonical section. Integrating this (real) n-form on L defines the "J-volume functional", which agrees with the length functional in dimension 1 but is in general different to the Riemannian volume functional. Our second main result, stated in Section 5.4, is that, in the appropriate setting, this functional is convex in our sense. It is perhaps worth emphasizing that the notion of geodesics lies entirely within the realm of complex analysis: a priori, it has no relationship to Kähler geometry. Our convexity result thus reveals a new form of compatibility between complex and metric data.
Applications to minimal Lagrangian submanifolds. This brings us to our study of the relationship between the J-volume and the Riemannian volume. The outcome is especially interesting when M is a Kähler-Einstein (KE) manifold with negative scalar curvature. Recall that an n-dimensional submanifold L in M is Lagrangian if the ambient Kähler form vanishes when restricted to L. Lagrangian submanifolds are a key topic in symplectic geometry. In the Kähler case it is particularly fruitful to study interactions between symplectic and Riemannian properties of L. For example, it is well-known that (i) in KE manifolds the mean curvature flow preserves the Lagrangian condition and (ii) in negative KE manifolds the minimal Lagrangians are strictly stable for the standard Riemannian volume. Fact (i) is the starting point for [11]. Our goal here is to further investigate fact (ii). Specifically, when M is negative KE we show the following. • The J-volume provides a lower bound for the standard Riemannian volume. The two functionals coincide on Lagrangian submanifolds. • The critical points of the J-volume are exactly the minimal Lagrangian submanifolds. It thus "weeds out" the additional critical points (non-Lagrangian minimal submanifolds) of the standard Riemannian volume. • The J-volume is strictly convex with respect to our geodesics. For a minimal Lagrangian this is the global counterpart of the aforementioned infinitesimal stability property. It is thus clear that the J-volume provides good control over minimal Lagrangians. No such convexity holds for the Riemannian volume functional.
A moment map. The above results fit into a larger picture. Indeed, the geometric features of T resemble those of two other well-known infinite-dimensional spaces which appear in Kähler geometry: the integrable (0, 1)-connections on a Hermitian vector bundle E, i.e. the holomorphic structures on E, and the Kähler potentials in a given Kähler class. In both cases we have the following. • A canonical connection and notion of geodesics, related to an infinite-dimensional group action and its formal "complexification". • A convex functional.
• A moment map encoding the group action, whose zero set coincides with the critical point set of the functional. Following this lead, in Section 8 we show the geometry of T can be rephrased in terms of the formal complexification of the group of (orientation-preserving) diffeomorphisms of L and of a moment map induced by the J-volume functional. In particular, in the negative KE context it follows that minimal Lagrangians can be re-interpreted as the zero set of a moment map. Open problems. Our results naturally lead to questions about minimal Lagrangians and their relationship with the geometry of negative KE manifolds. In the analogous problem for Kähler potentials, the moment map serves to relate the existence of critical points of the functional to algebraic stability properties of the manifold, whilst the uniqueness of these points is related to the convexity of the functional. This formalism thus provides a useful understanding of the geometry of Fano manifolds, and was indeed one of the ingredients of the recently accomplished existence theory for positive KE metrics. By contrast, the existence of KE-flat (Calabi-Yau) metrics was solved by Yau in the 1970s. Currently, the main questions here are related to calibrated geometry, mirror symmetry and its applications to String Theory in Physics. Given that the existence of negative KE metrics was also solved in the 1970s, by Aubin and Yau, one might wonder what are the most interesting open questions in this context. Our results provide evidence that minimal Lagrangians are closely related to deep aspects of this geometry. They also show that the J-volume is a useful tool with which to probe this relationship. On a technical level, a key feature of the space of Kähler potentials was its amenability to analytic methods. This led (across 20 years) to a complete existence theory for geodesics and to the corresponding extension of convexity results. The main analytic question we set up in this paper is whether an analogous theory is possible for geodesics in T . In Section 3 we provide a reformulation of the geodesic equation in terms of families of J-holomorphic curves intersecting the initial submanifold L. In the holomorphic setting this helps elucidate key features of the equation, by allowing us to use standard techniques from one complex variable to build examples and counterexamples to the existence of solutions. It also provides a fairly complete understanding of the uniqueness problem for geodesics. It is clear however that the final answer to these questions will require substantial effort, on a different technical scale. More generally, it seems worthwhile investigating the properties of geodesics in relation to other classical problems in complex analysis. After this work was complete it was pointed out to us by László Lempert that the notion of geodesics, in the 1-dimensional case, had already been investigated in unpublished work by Birgen [1] in relation to Levi-flat hypersurfaces and polynomial hulls. Work in progress by Maccheroni [13] shows that the notion of geodesics also finds applications to the study of complex-analytic properties of minimal Lagrangians. A second significant problem is the existence and uniqueness of minimal Lagrangians in negative KE manifolds. As for Kähler potentials, existence may be related to a stability-type condition on the given data while our convexity result provides some information on global uniqueness properties, cf. Section 8. 
Other aspects of the uniqueness question are discussed in [8] and in [12].
The space of totally real submanifolds To start, let us make three initial choices: • a 2n-dimensional manifold (M, J) with an almost complex structure; • an oriented n-dimensional manifold L; • a totally real immersion ι : L → M. It will be important to maintain the distinction between immersions of L and their corresponding images, i.e. "submanifolds". In general, a submanifold is an equivalence class of immersions, up to reparametrization via a diffeomorphism of L. Since the orientation of L will play a role, we are interested in a slightly more refined notion: an oriented submanifold is an equivalence class of immersions, up to reparametrization by orientation-preserving diffeomorphisms. The totally real condition is preserved under reparametrization, so it is well-defined on the space of (oriented) submanifolds. We now define our two main spaces of interest. • Let P be the space of totally real immersions of L into M which are homotopic, through totally real immersions, to the given ι. • Let T be the space of oriented totally real submanifolds obtained as the quotient of P by the group Diff(L) of orientation-preserving diffeomorphisms of L. We shall view π : P → T, where π is the natural projection, as a principal fibre bundle with respect to the obvious right group action of Diff(L). The totally real condition is open in the Grassmannian of tangent n-planes in M, so it is a "soft" condition: in particular, P is an open subset of the space of all immersions. It thus has a natural Fréchet structure, making it an infinite-dimensional manifold. Given any ι ∈ P, we can identify T_ιP with the space of all sections of (the pull-back of) the bundle TM over L. Moreover, T is (at least formally) also an infinite-dimensional manifold. Given L ∈ T, its tangent space T_LT can be obtained via the infinitesimal analogue of the operation which quotients immersions by reparametrization. Specifically, T_LT can be identified with sections of the bundle TM/TL ≃ J(TL) ≃ TL; we conclude that T_LT ≃ Λ^0(TL). The key point is that the totally real condition provides a canonical subspace in TM transverse to TL and a canonical isomorphism of this space with TL; i.e. the (extrinsic) "normal" bundle (defined via quotients) is canonically isomorphic to the (intrinsic) tangent bundle.
Remark The action of Diff(L) might not be free; it is guaranteed to be free only for embeddings. We will not worry about this issue, just as we will not be concerned about precise definitions of infinite-dimensional manifolds, Lie groups and bundles. Everything concerning such matters is taken as purely formal, but it provides vital insight into the geometry of T. We refer to [9] for one approach to infinite-dimensional geometry and analysis which could be applied here.
Remark Some orientable manifolds, e.g. n-spheres S^n, admit an orientation-reversing diffeomorphism φ. In this case, reparametrization by φ defines a natural Z_2-action on the space of immersions; two initial choices of totally real immersion related this way define different (non-homotopic) spaces P, thus T. Other orientable manifolds do not admit such diffeomorphisms: e.g. CP^2. In this case there is no distinction between submanifolds and oriented submanifolds.
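Numerically, the openness of the totally real condition is easy to see: an n-plane π ⊂ T_pM is totally real precisely when π + J(π) = T_pM, i.e. when the 2n × 2n matrix [V | JV] built from any basis V of π is invertible, a rank condition and hence an open one. A minimal sketch for M = C^n with the standard J (all names are ours):

```python
import numpy as np

def is_totally_real(V, J, tol=1e-10):
    """Columns of V span an n-plane pi in T_pM = R^{2n}; pi is totally real
    iff pi and J(pi) together span T_pM, i.e. [V | JV] has full rank."""
    W = np.hstack([V, J @ V])
    return np.linalg.matrix_rank(W, tol) == W.shape[0]

n = 2
J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])          # standard J on R^4 = C^2
real_plane = np.vstack([np.eye(n), np.zeros((n, n))])  # R^2 inside C^2
complex_line = np.array([[1.0, 0.0], [0.0, 0.0],
                         [0.0, 1.0], [0.0, 0.0]])      # a J-invariant plane
print(is_totally_real(real_plane, J), is_totally_real(complex_line, J))  # True False
```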
A canonical connection and geodesics Differentiating the action of Diff(L) at ι ∈ P we obtain a subspace V_ι of the tangent bundle T_ιP, canonically isomorphic to the Lie algebra Λ^0(TL) of vector fields on L. The space V_ι is the kernel of π_*[ι] : T_ιP → T_{π(ι)}T and is given by $V_\iota = \{\iota_*X : X \in \Lambda^0(TL)\}$. Consider $H_\iota := \{J\iota_*X : X \in \Lambda^0(TL)\}$. This space gives a complement to V_ι, in the sense that there is a decomposition $T_\iota P = V_\iota \oplus H_\iota$. Varying ι in P we obtain a distribution H in TP. Let ϕ ∈ Diff(L) and let ι ∈ P. Let R_ϕ denote the right action of ϕ on P, i.e. R_ϕι = ι ∘ ϕ. We now show that the distribution H is right-invariant. Lemma 2.1 For all ϕ ∈ Diff(L) and ι ∈ P, $(R_\varphi)_* H_\iota = H_{R_\varphi \iota}$. Proof: Let Jι_*X ∈ H_ι. Then Jι_*X ∈ T_ιP, so by definition there exists a curve ι_t in P with ι_0 = ι and (dι_t/dt)|_{t=0} = Jι_*X. Thus we may calculate for p ∈ L: $$(R_\varphi)_*(J\iota_*X)(p) = \tfrac{d}{dt}\big|_{t=0}\,(\iota_t \circ \varphi)(p) = (J\iota_*X)(\varphi(p)) = J(\iota \circ \varphi)_*\big(\varphi^{-1}_*X\big)(p).$$ Hence $(R_\varphi)_* J\iota_*X = J(R_\varphi\iota)_*(\varphi^{-1}_*X) \in H_{R_\varphi\iota}$. By Lemma 2.1 and the general theory of principal fibre bundles, H defines a connection on the principal fibre bundle P. Recall from the general theory that any representation ρ of Diff(L) on a vector space E defines an associated vector bundle P ×_ρ E over T; each fibre of this bundle is isomorphic to E. Such a bundle has an induced connection. Parallel sections of this bundle can be described as follows. Choose a curve of submanifolds L_t in T. Choose a horizontal lift ι_t, i.e. a curve in P satisfying π(ι_t) = L_t and (d/dt)ι_t ∈ H_{ι_t}. Choose any (t-independent) vector e ∈ E. Then the section [(ι_t, e)] of P ×_ρ E, defined along L_t, is parallel. We can obtain all such parallel sections simply by varying e. In particular, using the adjoint representation of Diff(L) on its Lie algebra gives the vector bundle P ×_ad Λ^0(TL). It is of fundamental importance to us that this bundle is canonically isomorphic to the tangent bundle of T, via $$P \times_{\mathrm{ad}} \Lambda^0(TL) \simeq T(T), \qquad [(\iota, X)] \mapsto \pi_*(J\iota_*X) \in T_{\pi(\iota)}T. \qquad (1)$$ Remark When M is complex (so J is integrable), we revisit (1) in Sections 6.2 and 6.4 from another viewpoint, as a consequence of Proposition 6.1. The isomorphism (1) implies that the connection given by H on P induces a connection on T(T). We can then describe parallel vector fields on T as above. Finally, recall that a curve L_t is a geodesic if its tangent vector field (d/dt)(L_t) is parallel. We thus obtain the following characterization of geodesics in T. Lemma 2.2 A curve L_t in T is a geodesic if and only if there exists a curve of immersions ι_t and a fixed vector field X in Λ^0(TL) such that π(ι_t) = L_t and $$\frac{d\iota_t}{dt} = J\iota_{t*}X. \qquad (2)$$ This implies that [ι_{t*}X, Jι_{t*}X] = 0, for all t for which L_t is defined. Proof: The form of (2) proves ι_t is horizontal, and X ∈ Λ^0(TL) plays the role of e ∈ E in the general theory. Assume L_t is a geodesic defined for t ∈ (−ǫ, ǫ). Let x(s) be an integral curve of X on L, defined for some s ∈ (a, b). Then $\iota(s, t) := \iota_t(x(s))$ is an immersed surface in M and ι_{t*}X, Jι_{t*}X represent its coordinate vector fields in the s and t directions, respectively. As such, they commute. Remark The existence of a canonical connection on the space of totally reals appears to be rather surprising. One might wonder why this is not true for the space S of all submanifolds. Although one can show there is a canonical right-invariant horizontal distribution on the space of all immersions I, defined by sections of the normal bundle, one seems unable to view T(S) as a vector bundle associated to I, so it does not receive an induced connection. In other words, the group action on I encodes only intrinsic information, and in general one cannot encode the extrinsic geometry of the normal bundle intrinsically.
Convexity Given geodesics, one has a natural definition of convex functionals on T. In the absence of existence results for geodesics, this notion could be vacuous. However, in the presence of geodesics, convex functionals provide powerful tools for analysing the geometry of T. We thus now turn to the existence problem.
The geodesic equation Once one has a notion of geodesics on a manifold M, there are two key existence issues which arise: (i) the Cauchy problem, i.e. the short-time existence of geodesics given an initial point and direction, and (ii) the boundary value problem, i.e. the existence of geodesics between two points in M. When M is finite-dimensional, or infinite-dimensional and Banach, the first problem is purely local and can be solved via the standard existence theory for ordinary differential equations. The second problem concerns the global properties of M and relates to the definition of geodesic completeness. In our case the manifold T is infinite-dimensional but only Fréchet, so existence and uniqueness results for geodesics are non-trivial. The goal of this section is to rephrase our notion of geodesics in terms of families of J-holomorphic curves in (M, J). This has several advantages. • It offers a geometrically appealing reformulation of the geodesic equation. • It clarifies the nature of the geodesic equation, indicating for example that it is not elliptic; however, it can be written as a family of elliptic equations. • It opens the door to standard tools in the theory of one complex variable. This viewpoint will lead us, at least when M is complex, to a complete solution of the uniqueness question. It does not give a complete answer to the existence problem, but it does yield useful insight by providing both examples and counterexamples and by suggesting a slight weakening of the notion of solution.
A reformulation of the geodesic equation We distinguish three cases: the Cauchy problem, geodesic rays and the boundary value problem. The Cauchy problem. Assume we have an initial L_0 ∈ T and initial direction in T_{L_0}T, which may be identified with a smooth vector field X on the abstract manifold L. Ideally, the initial value problem for the geodesic equation (2) can then be solved as follows. 1. Choose a totally real immersion ι_0 : L → M with π(ι_0) = L_0. 2. Consider the flow defined by X on L; choose an integral curve x = x(s) of X, where s ∈ I := (a, b). 3. Extend ι_0 ∘ x to a J-holomorphic curve ι = ι(s, t) defined on I × (−ǫ, ǫ), satisfying ι(s, 0) = ι_0(x(s)). 4. Varying the integral curve x gives a family of J-holomorphic curves ι. Since each point of L belongs to some integral curve, fixing the time parameter t defines a map ι_t : L → M which coincides with ι_0 for t = 0. If the J-holomorphic curves depend smoothly on the integral curves, ι_t will be smooth. Since immersions form an open set in the space of maps, ι_t will be an immersion for small t. Finally, the ι_t solve (2) by construction. Though appealing, this procedure entails some difficulties. In particular, we observe the following. • Fix X. To obtain a map defined on L for each t ∈ (−ǫ, ǫ) we need to be able to choose ǫ independent of the integral curve. • Moreover, we would like to find appropriate "uniformly bounded" vector fields such that ǫ is independent of the specific X. This is related to the possibility of defining an "exponential map" from a ball in T_{L_0}T into T. To proceed we must determine the correct framework within which to analyze our J-holomorphic equations. We are trying to solve an elliptic problem on I × (−ǫ, ǫ) by prescribing data inside the domain rather than, say, on the boundary.
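In coordinates, the construction in steps 1-4 amounts to the following Cauchy problem along each integral curve (this display is our paraphrase of the text above):

$$
\begin{cases}
\dfrac{\partial \iota}{\partial t}(s,t) = J(\iota)\,\dfrac{\partial \iota}{\partial s}(s,t), & (s,t) \in I \times (-\epsilon, \epsilon),\\[4pt]
\iota(s,0) = \iota_0(x(s)), & s \in I.
\end{cases}
$$

Note that the initial curve is automatically noncharacteristic: J(∂_s ι) can never be a real multiple of ∂_s ι ≠ 0, since J² = −Id has no real eigenvalues.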
Notice that the domain itself is not prescribed, as ǫ is to be determined. To tackle this problem, it is natural to use the "method of characteristics". The initial data is assigned on the curve I × {0} ⊂ I × (−ǫ, ǫ): this curve is noncharacteristic for our equation, so the method makes sense. Here, the only general existence result available is the Cauchy–Kowalevski theorem, which requires real analytic initial data. This regularity restriction is rather strong: from the geometric viewpoint one wants geodesics in the space of smooth immersions, built as above using maps C^∞(I × (−ǫ, ǫ), M). On the other hand, when M is complex, standard regularity theory implies that any solution is complex analytic with respect to s + it. In particular, if the solution exists, the initial data ι_0 ∘ x(s) must be real analytic. We conclude that the analytic setting is actually natural for the geodesic problem stated above, where the Cauchy–Kowalevski theorem provides strong existence results in Theorem 3.2. This same reasoning also demonstrates an obstruction to the existence of solutions to the Cauchy problem when the initial data is only assumed to be smooth. It is thus important to introduce a weaker notion of geodesic, as follows. Geodesic rays. In the standard theory of one complex variable, one often studies maps defined on closed domains: holomorphic on the interior, but only smooth or continuous up to the boundary. This leads us to the following. Definition 3.1 Fix L_0 ∈ T and JX ∈ T_{L_0}T. A geodesic ray starting from L_0 with direction JX is a curve of submanifolds L_t in T, for t ∈ [0, ǫ), for which there exists a curve of immersions ι_t, for t ∈ [0, ǫ), with the following properties: • for t ∈ (0, ǫ), ι_t solves the geodesic equation (2); • the maps ι_t depend smoothly on t ∈ [0, ǫ), with π(ι_0) = L_0 and (dι_t/dt)|_{t=0} = Jι_{0*}X. The existence problem for geodesic rays is manifestly different from the Cauchy problem previously described. The boundary value problem. We can now define geodesics between two submanifolds L_0 and L_1 in T as geodesic rays interpolating between them. To prove the existence of such a geodesic it is necessary to find a vector field X on L and smooth, totally real immersions ι_t : L → M, for t ∈ [0, 1], so that: • for t ∈ (0, 1), ι_t solves (2); • π(ι_0) = L_0 and π(ι_1) = L_1. As above, we can decompose a geodesic ray into a family of J-holomorphic curves parametrized by the integral curves of X on L, thus defined on domains I × [0, ǫ). For the boundary value problem, each curve provides a J-holomorphic filling between the boundary data prescribed by ι_0 on I × {0} and ι_1 on I × {1}. Remark If X defines a geodesic ray for t ∈ [0, ǫ) then −X defines a geodesic ray for t ∈ (−ǫ, 0] and the two induced families L_t coincide, up to time reversal.
Existence in the real analytic case The goal of this section is to prove the existence of an "exponential map" on T in the real analytic context, with respect to a Fréchet-type metric, as follows. Theorem 3.2 Let L be a compact real analytic n-manifold, let (M, J) be a real analytic almost complex 2n-manifold such that J is also real analytic, and let ι_0 : L → M be a real analytic, totally real immersion. Fix m, R > 0 and let B(m, R) be the space of real analytic vector fields X on L whose power series coefficients, in each chart of a fixed finite analytic atlas, satisfy the bounds $$|a_\alpha| \le m/R^{|\alpha|}. \qquad (3)$$ There exists ǫ > 0 (depending on m, R) such that, for each X ∈ B(m, R), there is a geodesic (L_t)_{t∈(−ǫ,ǫ)} in T given by immersions ι_t : L → M satisfying (2) and ι_t|_{t=0} = ι_0. In the above generality, the proof is an application of the Cauchy–Kowalevski theorem (see e.g. [16, Chapter 10 Theorem 4]).
The key ingredient in this theorem goes under the name "method of majorants". If J is integrable the proof of Theorem 3.2 is more transparent, and the non-integrable case works with the same method; we thus limit ourselves to the integrable setting.

To simplify, identify L with its image ι_0(L) ⊂ M. For each p ∈ L, choose an open polydisk P_i ⊆ C^n serving as a holomorphic coordinate chart for M, such that V_i := P_i ∩ R^n is a coordinate chart for L. Then choose U_i ∋ p which is open and compactly contained in V_i. By compactness of L we can extract a finite number of domains so that the U_i cover L. We now proceed in two steps.
1. Given m, R > 0 we find ǫ > 0 such that, for X satisfying (3) and x_0 ∈ U_i, there exists a unique real analytic integral curve x : (−ǫ, ǫ) → V_i with x(0) = x_0.
2. We then show that each complexified power series x(s + it), for |s + it| < ǫ, takes values in P_i. Up to identifications, this allows us to define ι_t(x(s)) := x(s + it). By varying x_0 ∈ U_i we see that ι_t is well-defined on L, and satisfies (2) by construction.

The integral curve equation is a system of ODEs of the form ẋ(s) = X(x(s)). The existence of a unique smooth solution follows from standard ODE theory. For the proof of Step 1, we review how the method of majorants shows that this solution is real analytic and furnishes a radius of convergence of the corresponding power series. It suffices to focus on the scalar ODE case. Suppose we have open sets 0 ∈ U ⊂ V ⊆ R with U compactly contained in V. Assume we want to solve the scalar ODE

ẋ(s) = f(x(s)), x(0) = x_0, (4)

where f is any real analytic function defined on V satisfying bounds of the form (5); equivalently, for any x_0 ∈ U the coefficients of the power series representation $f(x) = \sum a_n (x - x_0)^n$ satisfy

$|a_n| \le m/R^n$. (5)

Recall the following definition.

Definition 3.3 The power series $A(x) = \sum A_n x^n$ is a majorant of the power series $a(x) = \sum a_n x^n$, and we write a << A, if $|a_n| \le A_n$ for all n ≥ 0.

If we set $F(x) := \sum_{n \ge 0} m\, x^n/R^n = m/(1 - x/R)$, then the power series of f based at 0 satisfies f << F. The method of majorants shows, by examining the induced equations on the higher derivatives, that the power series of the solution x(s) of (4) satisfies x << ξ, where ξ solves

ξ̇ = F(ξ), ξ(0) = 0. (6)

For arbitrary x_0 ∈ U consider the power series of f based at x_0: $f(x) = \sum a_n (x - x_0)^n$. If we set y(s) := x(s) − x_0, so that y(0) = 0, we find $\dot y = \sum a_n y^n$. The bounds (5) imply that this equation for y can again be compared with (6), so that y << ξ. We conclude that x << ξ + x_0.

Equation (6) can be explicitly solved, which gives the radius of convergence of ξ in terms of m, R. We thus obtain (i) a lower bound on the radius of convergence of x(s), for any f satisfying (5) and initial data x_0 ∈ U, and (ii) an upper bound on the values of |x(s)| for s ∈ (−ǫ, ǫ) in terms of ξ(ǫ) + x_0. In particular, since U is compactly contained in V, by restricting ǫ we may assume that all solutions x(s), for x_0 ∈ U, are contained in V.

Step 1 can be proved by applying the same reasoning to ẋ = X(x) in each coordinate chart V_i. Our assumption that J is integrable allows us to complexify this data by simply complexifying the corresponding power series; if J were only almost complex, this is where we would use the Cauchy–Kowalevski theorem to prove the existence of such complexified data, i.e. to obtain solutions x(s, t) of the corresponding Cauchy–Riemann equation. Notice that, although x(s) is contained in V_i, we should not automatically assume that the values of x(s + it), for |s + it| < ǫ, are contained in P_i.
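To make the majorant argument concrete, here is a small Python sketch (our own illustration, not from the paper): it computes the Taylor coefficients of the solution of ẋ = f(x), x(0) = 0, by matching powers of s, and checks them against the coefficients of the explicit majorant solution ξ(s) = R(1 − √(1 − 2ms/R)) of (6), whose radius of convergence R/(2m) gives the lower bound used above. The test function f(x) = m/(1 + x/R) is an arbitrary choice satisfying the bounds (5).

```python
def mul_trunc(p, q, N):
    """Product of two truncated power series, kept to order N."""
    r = [0.0] * (N + 1)
    for i in range(min(len(p), N + 1)):
        for j in range(min(len(q), N + 1 - i)):
            r[i + j] += p[i] * q[j]
    return r

def ode_coeffs(a, N):
    """Taylor coefficients x_k of the solution of x'(s) = f(x(s)), x(0) = 0,
    where f(x) = sum_n a[n] x^n; obtained by matching powers of s."""
    x = [0.0] * (N + 1)
    for k in range(N):
        fx = [0.0] * (k + 1)            # f(x(s)) truncated to order k
        pw = [1.0]                      # current power x(s)^n
        for n in range(k + 1):
            if n < len(a):
                for j in range(min(len(pw), k + 1)):
                    fx[j] += a[n] * pw[j]
            pw = mul_trunc(pw, x, k)
        x[k + 1] = fx[k] / (k + 1)      # (k+1) x_{k+1} = [s^k] f(x(s))
    return x

m, R, N = 1.0, 1.0, 12
a_f   = [m * (-1.0 / R) ** n for n in range(N + 1)]  # f(x) = m/(1 + x/R), |a_n| <= m/R^n
a_maj = [m * (1.0 / R) ** n  for n in range(N + 1)]  # F(x) = m/(1 - x/R), the majorant

x  = ode_coeffs(a_f, N)
xi = ode_coeffs(a_maj, N)   # coefficients of xi(s) = R(1 - sqrt(1 - 2ms/R))
assert all(abs(xk) <= xmk + 1e-12 for xk, xmk in zip(x, xi))   # x << xi
print("radius of convergence of the majorant:", R / (2 * m))
print("xi_1, xi_2 (expected m and m^2/(2R)):", xi[1], xi[2])
```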
However, our method of bounding |x(s)| applies also to |x(s + it)|: this follows from the general fact that one can bound $|\sum a_n (s + it)^n|$ by $\sum |a_n|\, |s + it|^n$. As explained in Step 2, this concludes the proof of Theorem 3.2.

Example: the 1-dimensional case

We now turn to smooth data. It is instructive to study the Cauchy problem in the simplest case, where M = C and L_0 is a smooth, closed Jordan curve. Recall that C \ L_0 has two components: one bounded, one unbounded. We view L_0 as an embedding ι_0 = ι_0(θ) of the abstract manifold L := R/2πZ. For dimensional reasons any such embedding is totally real. To be concrete, we use the standard orientation on L defined by increasing angles and we assume the embedding is chosen so that L_0 is oriented in the anti-clockwise direction.

By our definition, geodesics through L_0 are generated by a choice of tangent vector field X. Since L is parallelizable and has the canonical, positively oriented vector field ∂_θ, we have X = f ∂_θ, for some f : L → R. The corresponding geodesic in T is determined by the 1-parameter family of curves ι = ι(θ, t) satisfying

∂ι/∂t = f J(∂ι/∂θ). (7)

This coincides with the geodesic equation (2). In particular, when f ≡ 1 this means that ι is holomorphic with respect to the standard complex structure on the cylinder L × (−ǫ, ǫ). We can use the biholomorphism with the annulus to reparametrize ι as a holomorphic map g := ι ◦ φ^{−1} : A → C. Our choice of orientations implies that, as t increases from 0, the geodesics invade the bounded component of C \ L_0.

We now show that the f ≡ 1 case is, in some sense, general. Indeed, using the ideas of Section 3.1, we can integrate X = f ∂_θ. If f has no zeros, i.e. X never vanishes, then the integral curve through any point of L is periodic and its parameter set is compact: we can identify it with S^1_R := R/2πRZ, for some R > 0. The integral curve is then a (possibly orientation-reversing) diffeomorphism θ = θ(s) : S^1_R → L, and the map ι(θ(s), t) is holomorphic on the cylinder S^1_R × (−ǫ, ǫ) (with the standard complex structure). Again, we can use the biholomorphism with the annulus to reparametrize ι as a holomorphic map g : A_R → C. Notice that this implies a rescaling of the initial vector field X. We summarize this discussion as follows.

Lemma 3.4 Fix L_0 and a nowhere-vanishing vector field X as above. Then, up to a rescaling of X, the geodesic family of curves L_t defined by this data is equivalent to a holomorphic map g defined on an annulus in C containing S^1. Each L_t is the image under g of some circle {|z| = r}; in particular, L_0 is the image of S^1 and X corresponds to ±∂_θ depending on the sign of f. Assume, for example, f > 0. Then, as the radial parameter r decreases from 1, the corresponding curves invade the bounded component of C \ L_0.

Remark Our discussion above indicates that the 1-dimensional case has a special feature. Recall from Lemma 2.1 that the horizontal distribution H on P is invariant under reparametrization. This means we can find all geodesics through L_0 by fixing any initial parametrization ι_0 and considering all possible vector fields: the geodesic in T defined by a different choice (ι_0 ◦ φ, X) will coincide with the geodesic defined by (ι_0, φ_*X). Thus, in general there is no advantage to changing the parametrization. In dimension 1, however, Diff(L) acts transitively on the non-vanishing vector fields (up to a change of scale). Above, we used this to bring X into the "standard form" ∂_θ, thus reducing the geodesic equation (2) to the standard Cauchy–Riemann equation.
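As a sanity check, here is a worked example (our own, assuming (7) has the form stated above, with J = i on C): take L_0 to be the unit circle and f ≡ 1.

```latex
% iota_0(theta) = e^{i theta}, f = 1:
\iota(\theta, t) = e^{i(\theta + it)} = e^{-t}\, e^{i\theta},
\qquad
\frac{\partial \iota}{\partial t} = -\,e^{i(\theta + it)}
 = i\,\frac{\partial \iota}{\partial \theta}.
```

Each L_t is the circle of radius e^{−t}: as t increases the curves shrink into the bounded component of C \ L_0, matching Lemma 3.4 with r = e^{−t}.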
Note, however, that when f < 0 the strategy of the preceding remark clashes with our initial decision to work with oriented submanifolds, i.e. to use only orientation-preserving diffeomorphisms. This is easily fixed by the observation that the geodesics defined by X and −X coincide, up to time reversal. To find all geodesics, it is thus enough to concentrate on those for which f > 0. A similar remark applies to vector fields with zeros (see below).

If the vector field X = f ∂_θ has zeros, then between any two zeros the new parameter set will be R, and the geodesic equation (2) pulls back to the standard Cauchy–Riemann equation on R × (−ǫ, ǫ). The zeros correspond to stationary points of the family of curves. As already mentioned, there is a necessary condition for the existence of solutions to this equation: the initial curve must be real analytic. This condition is also sufficient: given a local power series expansion of ι_0 with respect to the real variable θ, we obtain a holomorphic extension by replacing θ with θ + it.

In Section 3.1, in order to remain in the smooth category, we introduced geodesic rays. Using the above ideas, we can study geodesic rays in the 1-dimensional case and obtain a conclusion analogous to Lemma 3.4. In particular, the geodesic ray defined by L_0 and a non-vanishing vector field X is equivalent to a holomorphic map g defined on an annulus in C of the form R_1 < |z| < 1, smooth up to |z| = 1. Elliptic regularity theory shows that, if the boundary data is smooth, then g is smooth up to |z| = 1, even though in Definition 3.1 we assumed the geodesic ray to be only continuous with respect to t at t = 0. These results allow us to study the existence of geodesics using holomorphic function theory.

Geodesics via Fourier theory. We showed above that an initial curve L_0 and non-vanishing vector field X can be parametrized (up to rescaling X) via a map γ : S^1 → C and the standard vector field ∂_θ. By Lemma 3.4, this data defines a geodesic if and only if it admits a holomorphic extension g on an annulus A. Recall from standard theory that such g admit a Laurent power series representation $\sum_{n=-\infty}^{\infty} a_n z^n$, convergent on A. The coefficients a_n coincide with the Fourier coefficients of the periodic function γ. It follows that the existence of g, i.e. of the geodesic, depends on the convergence of the formal power series defined by the Fourier coefficients of γ. Recall that the Fourier coefficients of the curve are square-summable.

Conversely, given a square-summable sequence of coefficients a_n ∈ C, for n ∈ Z, we can ask whether it defines a curve γ : S^1 → C admitting a holomorphic extension g. Let $p_1(z) = \sum_{n=-\infty}^{-1} a_n z^n$ and $p_2(z) = \sum_{n=0}^{\infty} a_n z^n$.
• If p_1 and p_2 have radii of convergence R_1 < 1 and R_2 > 1 respectively, then the Laurent series p_1 + p_2 converges on the annulus R_1 < |z| < R_2 and defines an embedding γ of S^1. The image curve L_0 admits a geodesic corresponding to ∂_θ.
• If p_1 has radius of convergence R_1 < 1 and p_2 has radius of convergence 1 and converges for |z| = 1, then the Laurent series p_1 + p_2 converges on R_1 < |z| ≤ 1 and defines an embedding γ of S^1. The image curve L_0 admits a geodesic ray corresponding to ∂_θ.
• If p_1 and p_2 both have radius of convergence 1 and converge for |z| = 1, then the Laurent series degenerates: it converges only on S^1, defining a curve L_0 which does not admit a geodesic or geodesic ray corresponding to ∂_θ.

Example 3.5 Set $a_n := n^{-\log n}$ for n ≥ 1.
Then $\sum_{n=1}^{\infty} a_n z^n$ has radius of convergence 1 and converges absolutely for |z| ≤ 1, together with all derivatives. Adding this to any series $\sum_{n=-\infty}^{-1} a_n z^n$ with radius of convergence R_1 < 1 gives smooth curves which admit geodesic rays but not geodesics corresponding to ∂_θ. We can also combine it with $\sum_{n=-\infty}^{-1} |n|^{-\log |n|} z^n$ to obtain a smooth curve which admits neither a geodesic nor a geodesic ray corresponding to ∂_θ. To obtain examples which are only continuous, set $a_n := 1/n^2$.

Geodesics via the Riemann Mapping Theorem. Choose two closed Jordan curves L_0, L_1 in C which do not intersect. Let Ω be the region contained between these curves. A version of the Riemann mapping theorem, cf. [4, Theorem 5.8], proves that there exists an annulus A and a biholomorphism g : A → Ω which extends continuously to the boundary; if L_0, L_1 are smooth then the biholomorphism extends smoothly to the boundary. The restriction to the boundary provides parametrizations of L_0, L_1; setting X = ∂_θ, the theorem shows that for any two curves as above it is possible to solve the boundary value problem.

Remark Notice the regularizing behaviour of the geodesic equation (2), even for the boundary value problem: for all intermediate times t ∈ (0, 1), the corresponding curves are real analytic.

Concluding remarks. We summarize what we have learned from the 1-dimensional theory. Given an embedded curve L_0 ⊂ C, we showed the following.
• Infinitesimal deformations correspond to parametrizations (via integration of the tangential vector field f ∂_θ).
• A geodesic in a given direction corresponds to a holomorphic extension of the corresponding parametrization.
• Examples show that certain curves do not admit geodesics in certain directions.
• Given any curve L_0, there always exist infinitely many geodesic rays departing from it (corresponding to the arbitrary choice of a second curve L_1).

This suggests the existence question for geodesics is non-trivial, but not vacuous. A similar situation occurs in the analogous theory concerning Kähler metrics, cf. Section 7. There a weak notion of geodesic was found, leading to a satisfactory existence theory. We expect something similar is needed here. In particular, observe that our geodesic equation (2) is first order, rather than second order as one might expect: this corresponds to the fact that, in keeping with the principal fibre bundle viewpoint, it is expressed in terms of the velocity vector (being constant) rather than of the curve. Developing alternative expressions for geodesics and further investigation of the properties of the connection may contribute key ingredients to the existence theory.

Further results

Some of these same ideas can be extended to higher dimensions.

Existence and non-existence results when M = C^n. Consider a compact totally real submanifold L_0 in C^n and a tangent vector field X. Choose an integral curve x = x(s) and a parametrization ι_0, with components ι_0^i. If the curve x is closed we can study the existence of holomorphic extensions of γ := ι_0 ◦ x(s) exactly as when n = 1, by examining its component functions ι_0^i ◦ x(s). This does not work if the curve is open, parametrized by R. Notice however that the image of γ is contained in L_0, so it is bounded. We can thus interpret γ as a (smooth) tempered distribution and replace the role of Fourier coefficients with Fourier transforms. In particular, we expect to obtain information concerning the existence of holomorphic extensions of γ using the Paley–Wiener theorems.
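The classical model case can be illustrated concretely (our own sketch, not from the paper): γ(s) = sin(s)/s has Fourier transform supported in [−1, 1], and the Fourier integral formula evaluated at complex arguments recovers the entire extension sin(z)/z.

```python
import numpy as np

def gamma_ext(z, n=200001):
    """(1/2) * integral over xi in [-1,1] of e^{i xi z}, approximated by a
    Riemann sum; by Paley-Wiener this is an entire function of z."""
    xi = np.linspace(-1.0, 1.0, n)
    return np.mean(np.exp(1j * xi * z))

for z in (0.5, 1.0 + 1.0j, 2.0j):
    print(z, gamma_ext(z), np.sin(z) / z)   # the two columns agree
```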
It is known, for example, cf. [15, Theorem 7.23], that if the transform of γ has compact support then γ admits an entire holomorphic extension (satisfying certain growth conditions). Notice that in this case the transform of γ will generally not be smooth, otherwise it would be L^2, so γ would also be L^2. This applies also to any complex manifold M, as long as the submanifold is contained in one chart.

Uniqueness of geodesics. Perhaps the most interesting feature of our reformulation of the geodesic equation (2) is that it gives a fairly complete answer to the uniqueness question. Indeed, by restricting ι_0 to each integral curve we see that it suffices to prove the following claim: any two J-holomorphic maps ι(s, t), ι′(s, t) which coincide for t = 0 coincide for all t. In the holomorphic case (when J is integrable) the proof is simple. As above, ι corresponds locally to a collection of holomorphic functions, defined by its components in C^n. Uniqueness for the Cauchy problem then follows from the standard identity principle for holomorphic functions. Uniqueness for geodesic rays follows instead from the standard reflection principle.

If J is only almost complex the situation is more subtle. Uniqueness for the Cauchy problem is then a consequence of the "unique continuation theorem" for J-holomorphic curves, cf. [14, Theorem 2.3.2]. It seems reasonable that, using results in the literature, one could also prove uniqueness for geodesic rays.

Remark In the real analytic case the uniqueness of real analytic solutions is part of the Cauchy–Kowalevski theorem. One might hope to improve on this, obtaining uniqueness in the smooth category, using Holmgren's uniqueness theorem, cf. [17, Chapter 21]. However, Holmgren's theorem concerns only linear equations, and this corresponds to an important difference between the holomorphic and pseudo-holomorphic equations. In the former case, in local coordinates, the operator J is constant, so the Cauchy–Riemann equation is indeed linear; Holmgren's theorem thus applies to give an alternative proof of the uniqueness of geodesics and geodesic rays. In general almost complex manifolds, instead, the Cauchy–Riemann equation is not locally linear.

Example: the 1-dimensional case, continued

We now examine the notion of geodesic convexity from Definition 2.3 in the 1-dimensional case by exhibiting an example of a convex functional. This functional is well-known: it is the standard length functional. Its convexity is a rather striking fact, and it is worth emphasizing it by giving two proofs. The first relies on the specific nature of the geodesic equation, bringing into play basic holomorphic function theory. As above, it assumes we have reparametrized the curve by integrating f ∂_θ, but it requires that the domain remain compact. This first proof also leads to a monotonicity result for the length functional. The second proof is a direct geometric calculation, and holds for all f.

Let λ(t) denote the length of the curve L_t; we want to prove that λ is convex with respect to t. The biholomorphism z = φ(s, t) in (10) allows us to reformulate the problem in terms of a holomorphic map g = g(z) : A_R → C. Notice that g is holomorphic if and only if γ is holomorphic, and their complex derivatives satisfy $|\partial\gamma/\partial w| = (1/R)\,|\phi\, \partial g/\partial z|$. Using polar coordinates on C and setting

$\Lambda(r) := \int_0^{2\pi} \big| z\, \partial g/\partial z \big|\, d\theta$ on the circle |z| = r,

it follows that λ(t) = Λ(e^{−t/R}). It thus suffices to prove that Λ ◦ exp is convex, i.e. that on any segment [t_1, t_2] the graph of t ↦ Λ(e^t) is below the graph of the linear function passing through the points (t_1, Λ(e^{t_1})), (t_2, Λ(e^{t_2})).
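Before completing the proof, here is a quick numerical sanity check of this claim (our own sketch; g(z) = z + 0.2z² is an arbitrary test map): the midpoint value of t ↦ Λ(e^t) should not exceed the chord.

```python
import numpy as np

def Lambda(r, c=0.2, m=2048):
    # Lambda(r) = integral over |z| = r of |z g'(z)| dtheta, for g(z) = z + c z^2;
    # u = |z g'(z)| is subharmonic since z g'(z) is holomorphic
    th = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    z = r * np.exp(1j * th)
    return np.abs(z * (1.0 + 2.0 * c * z)).mean() * 2.0 * np.pi

t1, t2 = np.log(0.4), np.log(0.9)
mid   = Lambda(np.exp(0.5 * (t1 + t2)))
chord = 0.5 * (Lambda(np.exp(t1)) + Lambda(np.exp(t2)))
print(mid <= chord, mid, chord)   # convexity of t -> Lambda(e^t)
```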
Notice that z ∂g/∂z is holomorphic, so its norm u := |z ∂g/∂z| is a subharmonic function on the annulus. Convexity is then a classical result due to Riesz, proved as follows. Let v denote the harmonic function on the annulus A := {z : r_1 ≤ |z| ≤ r_2} with the same boundary values as u, and set dσ = r dθ. Since v is harmonic, the divergence theorem shows that $r \mapsto \int_0^{2\pi} \frac{\partial v}{\partial n}\, d\sigma$ is constant, so by subharmonicity

$\int_0^{2\pi} u\, d\theta \;\le\; \int_0^{2\pi} v\, d\theta \;=\; a \log r + b$

for some constants a, b ∈ R; our choice of boundary data implies that equality holds when r = r_1 or r = r_2. Changing variables we obtain the desired property of Λ(e^t). Similar methods show that if g extends to a holomorphic function on the disk then Λ(r) is non-decreasing, so λ(t) is non-increasing.

For the second proof, we first parametrize the curve by arclength: it is thus the image of some map γ_0(s), where s ∈ L. Choose a vector field X = f ∂_s and let γ(s, t) = γ_t(s) be a family of curves satisfying the corresponding geodesic equation (7). Set γ′ := ∂γ/∂s and γ̇ := ∂γ/∂t for a cleaner exposition. The length functional along γ_t is given by $\lambda(\gamma_t) = \int_L |\gamma'|\, ds$. We calculate its first and second t-derivatives using (7). Since we take no further t-derivatives and γ_0 was arbitrary, we can then set t = 0 without loss of generality. In this case, because |γ′_0| = 1, we see that ⟨γ″_0, γ′_0⟩ = 0 and thus γ″_0 = iκ_0 γ′_0, where κ_0 is the curvature of γ_0. Substituting these identities into the resulting formulae and using $\int_L (f f')'\, ds = 0$, we deduce that λ̈ ≥ 0. Therefore the length λ(γ_t) is a convex function of t.

A canonical volume functional

In higher dimensions the standard Riemannian volume functional is not convex with respect to our notion of geodesics. This is hardly surprising: when n ≥ 2 the totally real condition is an extra assumption, but the volume functional does not interact with this condition. The goal of this section is to show that, for totally real submanifolds, there is an alternative volume functional which (i) is canonical, (ii) depends on the totally real condition and (iii) is convex in certain situations.

To define this functional we need an alternative characterization of totally real planes: an n-plane π in T_pM is totally real if and only if α|_π ≠ 0 for all (or, for any) α ∈ K_M(p) \ {0}, where K_M is the canonical bundle of (M, J). Notice that n-planes π in T_pM which are not totally real contain a complex line: a pair {X, JX} for some X ∈ T_pM \ {0}. We call such n-planes partially complex. We then say that an n-dimensional submanifold is partially complex if this condition holds in the strongest sense, i.e. if each of its tangent spaces is partially complex.

Let TR^+ denote the Grassmannian bundle of oriented totally real n-planes in TM and let π ∈ TR^+(p). Let v_1, ..., v_n be a positively oriented basis of π. Since π is totally real, v_1, ..., v_n also form a complex basis of T_pM; we can then define the dual complex basis v*_1, ..., v*_n. This allows us to define a non-zero form v*_1 ∧ ... ∧ v*_n ∈ K_M(p). The form we have constructed depends on the choice of basis v_1, ..., v_n. We fix this by assuming we have a Hermitian metric h on K_M, and then define

$\sigma[\pi] := \frac{v_1^* \wedge \dots \wedge v_n^*}{|v_1^* \wedge \dots \wedge v_n^*|_h}$.

This form has unit norm and is now independent of the choice of basis. We have thus defined a map between bundles σ : TR^+ → K_M covering the identity. We also see that the restriction of σ[π] to π is a real-valued n-form.

Now let ι : L → M be an n-dimensional totally real immersion. We can then obtain global versions of the above constructions as follows.

Canonical bundle over L. Let K_M[ι] denote the pullback of K_M over L, so the fibre of K_M[ι] over p ∈ L is the fibre of K_M over ι(p) ∈ M. This defines a complex line bundle over L which depends on ι.
Observe that any complex-valued n-form α on T_pL defines a unique n-form on T_{ι(p)}M by identifying T_pL with its image via ι_* and extending complex-multilinearly in each argument. The totally real condition implies that this is an isomorphism: K_M[ι] is canonically isomorphic, via ι*, with the (ι-independent) bundle Λ^n(L, C) := Λ^n(L, R) ⊗ C of complex-valued n-forms on L. Pulling back σ along ι defines the J-volume form vol_J[ι] on L. When L is compact we obtain a "canonical volume" $\int_L \mathrm{vol}_J[\iota]$, for ι ∈ P. One may see that if φ ∈ Diff(L) then vol_J[ι ◦ φ] = φ*(vol_J[ι]), just as for the standard volume form, thus

$\int_L \mathrm{vol}_J[\iota \circ \varphi] = \int_L \mathrm{vol}_J[\iota]$.

Hence the canonical volume descends to T to define the J-volume functional

$\mathrm{Vol}_J(L) := \int_L \mathrm{vol}_J[\iota]$,

where ι is any parametrization representing the submanifold L. Already in this context it would be possible to study its first variation, thus characterizing its critical points. Using the connection on T one could also define its second variation, studying the stability properties of the critical points. We will do this below, in the presence of additional structure and hypotheses on M which will allow us to determine a useful expression for the first variation and a simplified formula for the second variation.

Notation. From now on we will sometimes simplify notation by dropping the reference to the specific immersion used. Since this is standard practice in other contexts, e.g. when discussing the standard Riemannian volume, we expect it will not create any confusion.

The J-volume in the Hermitian context

Assume now that (M, J) is almost Hermitian, i.e. we have a Riemannian metric ḡ on M compatible with J, so that J is an isometry defining a Hermitian metric h on M. We also choose a unitary connection ∇̄ on M. Let L be an oriented totally real submanifold of (M, J). In Riemannian geometry it is customary to work with tangential and normal projections π_T, π_⊥ and the Levi-Civita connection ∇. This however does not make use of the totally real condition, which implies that, for any p ∈ L, any vector Z ∈ T_pM can be written uniquely as Z = X + JY, where X, Y ∈ T_pL. This splitting induces projections π_L, π_J by setting π_L(Z) = X and π_J(Z) = JY: these are the natural projections in this context. The following fact is a simple computation.

The structures on M induce structures h, ∇̄ on K_M, which we can use to define the J-volume form on L. Notice that, in contrast to Section 4 where we only had a complex structure, we now have the 2-form ω(·,·) = ḡ(J·,·). In this section we can thus also discuss Lagrangian submanifolds, defined by the condition ι*ω = 0.

The J-volume versus the Riemannian volume

In the almost Hermitian context, given an immersion ι, we can define the usual Riemannian volume form vol_g, using the induced metric g. It is useful to compare this with the J-volume form, cf. also [11]. Let e_1, ..., e_n be a positive orthonormal basis for π and set h_{ij} = h(e_i, e_j), where h is the ambient Hermitian metric. We wish to calculate $|e_1^* \wedge \dots \wedge e_n^*|_h$. Observe that $h(\cdot, e_j) = h_{kj}\, e_k^*$, since both sides take the value h_{ij} on e_i. We therefore find that $|e_1^* \wedge \dots \wedge e_n^*|_h = (\det_{\mathbb C} h_{ij})^{-1/2}$. We can now obtain a well-defined function

ρ_J : TR^+ → R, ρ_J(π) := vol_J(e_1, ..., e_n) = $(\det_{\mathbb C} h_{ij})^{1/2}$,

because this quantity is independent of the orthonormal basis e_1, ..., e_n. Restricting ρ_J to an oriented totally real submanifold L, we obtain the identity vol_J = ρ_J vol_g. Notice that h = g − iω and that, using the obvious notation for the components of g and ω with respect to e_1, ..., e_n, we have h_{ij} = δ_{ij} − iω_{ij}. We see that ω_{ij} = g(Je_i, e_j) and −ω_{ij} = g(e_i, Je_j).
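The comparison is completed in the next paragraph (ρ_J ≤ 1, with equality exactly in the Lagrangian case). As an illustration (our own sketch, not from the paper), here is ρ_J = (det_C h_ij)^{1/2} computed for a family of totally real planes in C², interpolating between a Lagrangian plane and a partially complex one:

```python
import numpy as np

def rho_J(basis):
    """rho_J(pi) = (det_C h_ij)^(1/2) for a g-orthonormal basis of an n-plane
    in C^n, with the standard Hermitian metric h(u, v) = u . conj(v)."""
    h = np.array([[np.dot(u, np.conj(v)) for v in basis] for u in basis])
    return np.sqrt(max(np.linalg.det(h).real, 0.0))

for alpha in (0.0, np.pi / 6, np.pi / 3, np.pi / 2):
    u1 = np.array([1.0, 0.0], dtype=complex)
    u2 = np.array([1j * np.sin(alpha), np.cos(alpha)])
    # alpha = 0: the Lagrangian plane R^2 (rho_J = 1);
    # alpha = pi/2: the plane contains the complex line through u1 (rho_J = 0)
    print(round(alpha, 3), rho_J([u1, u2]))   # expect |cos(alpha)|
```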
Hence we see that ρ_J(π) ≤ 1, with equality if and only if π is Lagrangian. More generally, given any basis v_1, ..., v_n for π, an analogous formula for ρ_J holds, replacing orthonormality by the appropriate Gram determinants. We can set ρ_J(π) = 0 when π is partially complex and extend the map σ to all n-planes, simply setting σ[π] = 0 if π is partially complex. This is particularly reasonable in this almost Hermitian setting, where there is a natural topology on the Grassmannian of n-planes: this choice of extension of σ is justified by the fact that it is the unique one which preserves the continuity of σ. Applying these observations to submanifolds we deduce the following.

Lemma 5.2 Let L be a compact oriented totally real submanifold. Then Vol_J(L) ≤ Vol_g(L), with equality if and only if L is Lagrangian. Moreover, if L is Lagrangian then the first variations of Vol_J and Vol_g at L coincide.

Proof: The first statement follows from (12). To prove the second, let L_t be a 1-parameter family of totally real submanifolds such that L_0 is Lagrangian. Set f(t) := Vol_J(L_t) and g(t) := Vol_g(L_t). Then f ≤ g, so g − f ≥ 0. Equality holds when t = 0: this is a minimum point, so it is necessarily critical. It follows that f′(0) = g′(0). Since this holds for any 1-parameter family, we obtain the desired conclusion.

First variation of the J-volume

Proposition 5.3 Let ι_t : L → L_t ⊆ M be a one-parameter family of totally real submanifolds in an almost Hermitian manifold and let ∂ι_t/∂t|_{t=0} = Z. Set ι = ι_0 and g = ι*ḡ. If Z = X + JY for tangent vectors X, Y, then the first variation of vol_J along Z can be computed explicitly; in the resulting formula, at p ∈ L, e_1, ..., e_n denotes a g-orthonormal basis for T_pL. Notice that the quantities which appear in the formula are invariant under changes of orthonormal basis and so are globally defined.

Proof: Since [Z, e_i] = 0, we have $\bar\nabla_Z e_i = \bar\nabla_{e_i} Z + T(Z, e_i)$, where T is the torsion of ∇̄. The first part of the result follows. For Z = X tangential we conclude using Cartan's formula, which gives the result.

Now suppose that L is compact (without boundary). We can then define the J-volume of L as before, using (12), by

$\mathrm{Vol}_J(L) := \int_L \rho_J\, \mathrm{vol}_g = \int_L \big(\mathrm{vol}_{\bar g}(e_1, \dots, e_n, Je_1, \dots, Je_n)\big)^{1/2}\, \mathrm{vol}_g$,

where e_1, ..., e_n is an orthonormal basis for each tangent space. We now compute the first variation of Vol_J. By Proposition 5.3, if Z = X + JY for X, Y tangential, then the tangential part contributes nothing, since $\int_L \mathrm{div}(\rho_J X)\, \mathrm{vol}_g = 0$ by Stokes' Theorem. Hence it is enough to restrict to Z = JY.

Our final result is phrased in terms of a "J-mean curvature vector field" defined as follows. We use the metric ḡ to define the transposed operators of the projections π_L, π_J; for JX in the image of the transpose of π_J, this means JX ∈ (T_pL)^⊥ and thus X ∈ J(T_pL)^⊥. Then, using the tangential projection π_T defined using g, one may check that $\pi_T J \bar\nabla \pi_{T_pL} : T_pL \times T_pL \to T_pL$ is C^∞-bilinear on its domain, so it is a tensor and its trace is a well-defined vector field on L. We now define H_J in terms of this trace, as in (16): this is a well-defined vector field on L, taking values in the bundle J(TL). We refer to [11] for an alternative expression for H_J.

Proposition 5.4 Let ι_t : L → L_t ⊆ M be compact totally real submanifolds in an almost Hermitian manifold and let ∂ι_t/∂t|_{t=0} = X + JY for X, Y tangential. Then the first variation of Vol_J is expressed in terms of the vector fields H_J, given by (16), and S_J, where, given p ∈ L, e_1, ..., e_n is an orthonormal basis for T_pL.

Proof: Using the definition of H_J in (16), we calculate the normal contribution, where we use $-J\pi_T J(JX) = JX$. The formula then follows from (15).

Remark If we define a suitable Riemannian metric G on T, pairing JX, JY ∈ T_LT, then the downward gradient vector field of the J-volume functional Vol_J on T with respect to G is given by H_J + S_J. The corresponding flow (the "J-mean curvature flow") is studied in [11].

Second variation of the J-volume

We now study the stability of critical points of the J-volume, so we calculate the second variation of the J-volume form.
This generalises calculations in [2, Proposition 3], which built on the second variation of the volume of Lagrangians in Kähler manifolds derived by Chen, Leung and Nagano [3, Theorem 4.1].

Proposition 5.5 Let ι_{s,t} : L → L_{s,t} ⊆ M be a two-parameter family of totally real submanifolds in an almost Hermitian manifold and let ∂ι_{s,t}/∂s|_{s=t=0} = W and ∂ι_{s,t}/∂t|_{s=t=0} = Z. Then the second variation of vol_J is given (summing over repeated indices) by

$\frac{\partial^2}{\partial s\, \partial t}\Big|_{s=t=0}\mathrm{vol}_J = \Big(\, g(\pi_L J(\bar\nabla_{e_i}W + T(W,e_i)), e_j)\, g(\pi_L J(\bar\nabla_{e_j}Z + T(Z,e_j)), e_i)$
$\qquad - g(\pi_L(\bar\nabla_{e_i}W + T(W,e_i)), e_j)\, g(\pi_L(\bar\nabla_{e_j}Z + T(Z,e_j)), e_i)$
$\qquad + g(\pi_L(\bar\nabla_{e_i}W + T(W,e_i)), e_i)\, g(\pi_L(\bar\nabla_{e_j}Z + T(Z,e_j)), e_j)$
$\qquad + g(\pi_L(\bar R(W,e_i)Z + \bar\nabla_{e_i}\bar\nabla_W Z), e_i)\,\Big)\, \mathrm{vol}_J$,

where at p ∈ L, e_1, ..., e_n is an orthonormal basis for T_pL. Notice again that these quantities are independent of the orthonormal basis chosen for T_pL and hence are globally defined.

Proof: We calculate, substituting ∇̄_W for ∂/∂s|_{s=t=0} and ∇̄_Z for ∂/∂t|_{s=t=0}, and expand the resulting expression (18). The first and fourth, the second and fifth, and the third and sixth terms in (18) pair up. By (14), replacing ∇̄_W e_i = ∇̄_{e_i}W + T(W, e_i) (since [W, e_i] = 0), and similarly for Z, gives the first three terms in the statement. For the remaining terms, again replacing ∇̄_W e_i = ∇̄_{e_i}W + T(W, e_i) gives the final terms.

A simple special case is as follows.

Corollary 5.6 Use the notation of Proposition 5.5. If W = Z = X is tangential, then the second variation of vol_J reduces to the Lie derivative term $\mathcal L_X \mathcal L_X\, \mathrm{vol}_J$.

Proof: We can see this by using Cartan's formula twice.

A more important case is the following.

Proposition 5.7 Use the notation of Propositions 5.4 and 5.5. If W = Z = JY, where Y is tangential and M is Kähler, then the second variation simplifies to the formula integrated in Proposition 5.8 below.

Proof: In this setting ∇̄ = ∇, the Levi-Civita connection, and T = 0, so Proposition 5.5 applies directly. Moreover, Proposition 5.5 and Corollary 5.6 give an expression for $\mathrm{div}(Y\, \mathrm{div}(\rho_J Y))\, \mathrm{vol}_g$ involving the terms

$g(\pi_L J\nabla_{e_i}Y, e_j)\, g(\pi_L J\nabla_{e_j}Y, e_i) - g(\pi_L \nabla_{e_i}Y, e_j)\, g(\pi_L \nabla_{e_j}Y, e_i)$.

We observe that, since ∇J = 0 as M is Kähler,

$g(\pi_L J\nabla_{e_i}(JY), e_j)\, g(\pi_L J\nabla_{e_j}(JY), e_i) - g(\pi_L \nabla_{e_i}(JY), e_j)\, g(\pi_L \nabla_{e_j}(JY), e_i)$
$\qquad = g(\pi_L \nabla_{e_i}Y, e_j)\, g(\pi_L \nabla_{e_j}Y, e_i) - g(\pi_L J\nabla_{e_i}Y, e_j)\, g(\pi_L J\nabla_{e_j}Y, e_i)$.

Combining these identities, using the Kähler condition together with (19) and (20), and noting that T = 0 as in Propositions 5.3 and 5.4, the result follows.

We deduce the following important second variation formula for the J-volume functional in the Kähler setting, as an immediate corollary of Proposition 5.7.

Proposition 5.8 Let ι_t : L → L_t ⊆ M be compact totally real submanifolds in a Kähler manifold and let ∂ι_t/∂t|_{t=0} = JY for Y tangential. Then the second variation $\frac{d^2}{dt^2}\mathrm{Vol}_J(L_t)\big|_{t=0}$ is given by (21).

If L is a critical point of Vol_J, so that H_J = 0, then all the terms in the integrand in (21) are automatically non-negative except for −Ric(Y, Y), so we can ensure non-negativity by imposing an ambient curvature condition. We deduce the following, which first appeared in [2].

Corollary 5.9 In a Kähler manifold with Ric ≤ 0 (respectively, Ric < 0), the critical points of the J-volume functional are stable (respectively, strictly stable).

Remark We can repeat our discussion in the almost Hermitian setting, but the appearance of torsion terms means the second variation formula is more complicated and does not, as far as we are aware, naturally lead to a stability property for the critical points of the J-volume functional as in the Kähler case.

Convexity of the J-volume

Stability is an infinitesimal condition.
We now want to show that we can obtain a much stronger result by taking into account our notion of geodesics in T. To start, notice that if Ric(Y, Y) ≤ 0 then everything in the second variation formula (21) is non-negative except potentially for $-g(\pi_J(\nabla_{JY}JY + \nabla_Y Y), H_J)$. We also see that, by locally extending Y in a neighbourhood of L,

$\nabla_{JY}JY + \nabla_Y Y = J[JY, Y]$,

using ∇J = 0 and the fact that ∇ is torsion-free. Hence, if we deform L in a direction JY such that [JY, Y] = 0, which is equivalent to requiring that the local diffeomorphisms of L generated by Y and the deformations of L generated by JY commute, then Vol_J is convex in the direction JY, in the sense that the second variation is non-negative. This condition is precisely guaranteed by our notion of geodesic from Lemma 2.2, so we deduce the following.

Theorem 5.10 In a Kähler manifold with Ric ≤ 0 (respectively, Ric < 0), the J-volume functional is convex (respectively, strictly convex) in the sense of Definition 2.3.

Critical points of the J-volume

Analogously to the Riemannian setting, we say that a totally real submanifold is J-minimal if it is a critical point for the J-volume, i.e. if H_J = 0. Recall from Lemma 5.2 that the J-volume coincides with the standard volume on Lagrangians and that the sets of J-minimal Lagrangians and minimal Lagrangians coincide. In [11] we show that this result can be improved by adding assumptions on the ambient manifold. Specifically, we prove the following.

Proposition 5.11 Assume M is Kähler–Einstein with Ric ≠ 0. Then the sets of J-minimal totally real submanifolds and minimal Lagrangians coincide.

The case of Kähler Ricci-flat, in particular Calabi–Yau, manifolds is special. In this case, by a calibration argument, J-minimal totally real submanifolds are Vol_J-minimizers: we call them "special totally real submanifolds", in analogy with the well-known class of special Lagrangians, and study them in [11]. The following uniqueness result is a simple consequence of strict convexity.

Corollary 5.12 Let M be a Kähler manifold with Ric < 0 and let L_0 ∈ T be J-minimal. Let Γ ⊆ T denote the set of totally real submanifolds which can be connected to L_0 by a geodesic ray. Then L_0 is the unique J-minimal submanifold in Γ.

Abstract framework

We now introduce an abstract framework which will help us clarify and continue to analyse the structure of T.

A canonical connection on homogeneous spaces

Let G be a finite-dimensional Lie group. Let L and R denote the left and right multiplication operators, Ad the adjoint action of G on itself, and ad its differential, inducing an action of G on T_eG. Let g denote the Lie algebra of G, i.e. the space of L-invariant vector fields. In the course of this section it will be useful to emphasize the distinction between T_eG and g, using the notation X ↦ X̃ to refer to the isomorphism induced by L. Given X ∈ T_eG, consider the 1-dimensional subgroup of diffeomorphisms of G defined by the flow φ_t of X̃.

The isomorphisms (L_g)_* : T_eG → T_gG identify each tangent space with T_eG, thus inducing a canonical way to differentiate vector fields. This yields a connection on TG, known as the canonical L-invariant connection. The parallel vector fields are the elements of g, so the flowlines of φ_t are the geodesics of this connection. In particular, the geodesics through e are the 1-parameter subgroups exp(tX) := φ_t(e). The L-invariance of the connection implies that L preserves the geodesics. This is reflected in the fact that, for any g ∈ G, L_g φ_t coincides with the flowline passing through g.
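As an illustration (our own sketch, not from the paper, assuming SciPy is available): for a matrix group, the geodesics of the canonical L-invariant connection through g_0 with initial direction (L_{g_0})_*X are the translated 1-parameter subgroups t ↦ g_0 exp(tX).

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, -1.0],
              [1.0,  0.0]])        # element of so(2) = T_e SO(2)
g0 = expm(0.3 * X)                 # an arbitrary base point in SO(2)

for t in (0.0, 0.5, 1.0):
    gt = g0 @ expm(t * X)          # geodesic: flowline of the left-invariant field
    print(np.round(gt, 4))         # rotation by angle 0.3 + t

# left translation by any h maps this geodesic to the geodesic through h g0:
h = expm(1.1 * X)
assert np.allclose(h @ (g0 @ expm(0.5 * X)), (h @ g0) @ expm(0.5 * X))
```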
Now fix a Lie subgroup H ≤ G together with a splitting T_eG = T_eH ⊕ M, where M is invariant under ad_h for all h ∈ H, and let h, m denote the corresponding L-invariant distributions on G, so that TG = h ⊕ m. Consider the projection π : G → G/H, viewed as an H-principal fibre bundle. Notice that h is tangent to the R_H-action: choosing g ∈ G and a 1-parameter subgroup h_t in H, we see that

$\frac{d}{dt}\Big|_{t=0}\, g\, h_t = (L_g)_*\, \frac{d}{dt}\Big|_{t=0} h_t \in h_{|g}$.

This is a manifestation of the fact that L-invariant vector fields are the fundamental vector fields of the R-action. We also see that m is R_H-invariant: given X ∈ M, so that (L_g)_*X ∈ m_{|g}, we have

$(R_h)_*(L_g)_*X = (L_{gh})_*(\mathrm{ad}_{h^{-1}}X) \in m_{|gh}$.

The splitting TG = h ⊕ m thus defines a connection on the principal fibre bundle, and induced connections on all associated bundles G ×_ρ V, where ρ is an H-action on the vector space V. The following result shows that one of these bundles is particularly relevant to the geometry of G/H.

Proposition 6.1 There is an isomorphism $G \times_{\mathrm{ad}_H} M \simeq T(G/H)$. Thus G/H has a canonical connection induced from the connection m on the principal bundle G. Geodesics in G/H are of the form π(g_t), where g_t is a horizontal curve in G satisfying, for some fixed X ∈ M,

$\frac{d}{dt} g_t = (L_{g_t})_* X$.

In other words, geodesics in G/H are the projections of geodesics in G defined by the canonical L-invariant connection, with initial direction in M.

Geometry of complexified Lie groups

Let G^c be a complexified Lie group, i.e. a complex Lie group with Lie algebra isomorphic to g ⊗ C. We now study the homogeneous space G^c/G. The maps L and R are holomorphic, so each operator ad_g : T_eG^c → T_eG^c commutes with the complex structure J on G^c. This implies that M := J(T_eG) is ad_G-invariant, so we can apply the above theory using the splitting g ⊗ C = g ⊕ ig. According to Proposition 6.1, there is an isomorphism $G^c \times_{\mathrm{ad}_G}(ig) \simeq T(G^c/G)$. It follows that G^c/G has a canonical connection, whose geodesics are the projections of curves g_t in G^c satisfying, for some X ∈ T_eG,

$\frac{d}{dt} g_t = (L_{g_t})_*(JX)$, (24)

which is an ODE on G^c. If G^c is infinite-dimensional there may be no solutions; however, if a solution does exist for a given initial point, it will exist for any initial point because (24) is L-invariant. In particular, the solution corresponding to the initial point e ∈ G^c is the 1-parameter subgroup exp(tJX) ⊂ G^c. We can also try to integrate X, obtaining a 1-parameter subgroup exp(sX) ⊂ G. Assume these subgroups exist. Consider the real 2-dimensional distribution in TG^c generated by X̃ and JX̃. Since the Lie bracket commutes with J, we see that [X̃, JX̃] = 0, so the distribution is integrable and our integrations yield a 1-dimensional complex abelian Lie subgroup of G^c, spanned by exp(sX), exp(tJX). Abstractly, it is the complexification of the Lie group exp(sX); it is isomorphic to S^1 × R or to C depending on whether exp(sX) is compact or not.

Summarizing, the geodesics in G^c/G are equivalent (through projection and L-invariance) to the real 1-parameter subgroups in G^c generated by JX, or to the complex 1-parameter subgroups in G^c generated by X, for X ∈ T_eG. The above applies also to the boundary value problem for geodesics in G^c/G: any geodesic γ(t), for t ∈ [a, b], interpolating between two points in G^c/G lifts to a holomorphic map Σ → G^c, where Σ := S^1 × [a, b] or R × [a, b], with prescribed boundary values. More generally, one can study the existence of holomorphic maps Σ → G^c with given boundary values, where Σ is any Riemann surface with boundary.

Notation. From now on we will often relax the distinction between T_eG and g, and the corresponding distinction between X and X̃.
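A toy example of the above (our own illustration): G = U(1) inside G^c = C^*, with g = iR and ig = R.

```latex
% For X = ia \in T_e U(1) (with a \in R) we have JX = i(ia) = -a, so
\exp(sX) = e^{isa} \in U(1),
\qquad
\exp(tJX) = e^{-ta} \in \mathbb{R}_{>0}.
% Together these span all of C^* \simeq S^1 \times R, the complexification
% of U(1); the geodesics in G^c/G \simeq R_{>0} are the curves t -> e^{-ta},
% i.e. straight lines in logarithmic coordinates.
```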
Definition 6.2 A function f : G^c/G → R is strictly convex if it is strictly convex when restricted to all geodesics in G^c/G. Equivalently, if the lifted function F := π*f : G^c → R satisfies

$JX(JX(F)) = \frac{d^2}{dt^2}(F \circ g_t) > 0$,

for all geodesics g_t in G^c with velocity JX, for some X ∈ T_eG.

Proposition 6.3 Any strictly convex function f : G^c/G → R lifts to a Kähler potential F := π*f on G^c.

Proof: Consider the 2-form $\omega_f := i\partial\bar\partial F = \tfrac12 dd^cF$ defined on G^c. By construction it is of type (1, 1). We need to show that it is positive, i.e. that the symmetric tensor ω_f(·, J·) is positive definite. This is a pointwise statement which must be tested on every vector in T_gG^c = (g + ig)_{|g}, for all g ∈ G^c. Equivalently, it suffices to prove that ω_f is positive when restricted to any complex line. Since ig is totally real of maximal dimension, it must intersect the line, so we may assume our line is generated by a vector X in ig. For our computation it is then sufficient to consider the restriction of F to the submanifold of G^c obtained by integrating the vector fields X, JX. We now see our problem corresponds to the n = 1 case of the following fact: given f : R^n → R and F := π*f : R^{2n} → R, if f is strictly convex then $i\partial\bar\partial F$ is positive. Indeed, writing z_j = x_j + iy_j with π the projection onto the y-coordinates, it is simple to compute that

$\frac{\partial^2 F}{\partial z_j\, \partial \bar z_k} = \frac14\Big(\frac{\partial^2 F}{\partial x_j\, \partial x_k} + \frac{\partial^2 F}{\partial y_j\, \partial y_k}\Big) = \frac14\, \frac{\partial^2 f}{\partial y_j\, \partial y_k}$,

which is positive definite by assumption. Trivially, ω_f is closed, so the result follows.

Proposition 6.3 shows that any strictly convex function f on G^c/G defines a Kähler structure ω_f on G^c. As G acts holomorphically on G^c and preserves the Kähler potential, it preserves ω_f. Let Crit(f) = {p ∈ G^c/G : df_{|p} = 0} denote the set of critical points of f.

Proposition 6.4 The action of G on G^c, endowed with the Kähler structure ω_f, is Hamiltonian with moment map µ_f defined by $\langle \mu_f, X\rangle := -\tfrac12\, d^cF(\tilde X)$. In particular, $\mu_f^{-1}(0) = \pi^{-1}(\mathrm{Crit}(f))$. Since f is strictly convex, Crit(f) is either empty or a single point, so $\mu_f^{-1}(0)$ is either empty or a single G-orbit in G^c.

Proof: For any X ∈ g, duality with g* defines the function $-\tfrac12\, i_{\tilde X}\, d^cF$ on G^c. We need to show that X̃ is the Hamiltonian vector field associated to this function, i.e. that $d(-i_{\tilde X}\, d^cF) = 2\, i_{\tilde X}\, \omega_f$. Using Cartan's formula, we see

$d(-i_{\tilde X}\, d^cF) = -\mathcal L_{\tilde X}(d^cF) + i_{\tilde X}\, dd^cF$.

The first term vanishes because both F and J are preserved by the action of G. To conclude, notice that g ∈ G^c lies in $\mu_f^{-1}(0)$ if and only if $(dF \circ J)_{|g}(\tilde X) = dF_{|g}(J\tilde X) = 0$ for all X ∈ g. Since F is G-invariant this is equivalent to dF_{|g} = 0, thus to df_{|\pi(g)} = 0.

Existence of critical points via a stability condition

The interpretation of critical points of f : G^c/G → R as zeros of a moment map is geometrically interesting, but in itself does not bring us closer to understanding whether, in any specific situation, such points exist. If, however, we can embed G^c with its given structures into a larger Kähler manifold M, we can sometimes apply the following general framework for studying this existence problem.

Let (M, g, J, ω) be a Kähler manifold endowed with a G-action preserving g and J. To simplify matters we assume this action is free. Let us assume that G^c also acts on M, preserving J: this gives a family of G^c-orbits in M. As a first step, we are interested in finding situations where each orbit O admits a canonical function F such that $\omega_{|O} = i\partial\bar\partial F$, as in Proposition 6.3. An example of this is as follows. Assume M is polarized, i.e. there is a holomorphic line bundle L over M with a Hermitian metric such that the Chern connection has curvature Θ = iω. Recall the standard formula for Θ in terms of a local holomorphic section: $\Theta = \bar\partial\partial \log |\sigma|^2$. It follows that $\omega = i\,\partial\bar\partial \log |\sigma|^2$. If the action of G is Hamiltonian, the moment map yields a canonical lift of the infinitesimal action of G to the total space of L, cf.
[6, Section 6.5] for details. Let us assume this integrates to an action of G^c. Any point in L then generates a G^c-orbit Õ which projects to a G^c-orbit O in M. Let us think of Õ as the graph of a non-vanishing holomorphic section σ : O ⊂ M → L. Consider the function F := log |σ|² : O → R. By construction G preserves the Hermitian metric, so F = π*f for some f : G^c/G ≃ O/G → R. It turns out that f is strictly convex in our sense; the formula for ω corresponds exactly to the situation of Proposition 6.3.

In the above situation we say an orbit O is stable if f is proper, so that it admits a critical point. Let M^s denote the set of points in M whose corresponding G^c-orbits are stable. According to Proposition 6.4 there is a 1:1 correspondence between the stable orbits and the G-orbits in the zero set of the moment map. The key point is that, in specific situations, stability of a given orbit can sometimes be tested using purely holomorphic information on M and the G^c-action. We thus get a correspondence between holomorphic and symplectic data on M, addressing the existence of critical points of the functions f.

Summarizing: if we can embed our given complexified group G^c, endowed with the structure ω_f defined by a strictly convex function f : G^c/G → R, into some Kähler M with a G^c-action so that it coincides with one of these orbits, then we can hope to test the existence of critical points of f by verifying some type of "stability condition" on that orbit.

Extension to infinitesimal complexifications

Finite-dimensional examples of stability and its relation to existence problems are a classical topic of Algebraic Geometry, related to Geometric Invariant Theory and the Kempf–Ness theorem. Gauge theory provided the first context in which this abstract framework arose in an infinite-dimensional setting: this is related to the Hitchin–Kobayashi conjecture concerning the existence of Hermitian–Einstein connections on a given Hermitian vector bundle E over a Kähler manifold, cf. [6] for details. In this case G is the group of unitary transformations of E, and its complexification G^c is the group of automorphisms of E.

In general, when G is infinite-dimensional there does not exist a complexification G^c, cf. [10]. The above theory can thus not be applied. For this reason Donaldson [5] introduced a slightly weaker notion of "infinitesimal complexification" of G. In this framework we can recover the above results, as follows. The relevant data defines an infinitesimal Lie group Z with Lie algebra V.

Definition 6.6 Let (Z, J) be a complex manifold. Assume there exists a Lie group G acting freely on the right on Z and preserving J. Given X ∈ g, let X̃ be the corresponding fundamental vector field: specifically, if X is the infinitesimal deformation of the 1-parameter subgroup g_t, then

$\tilde X_{|\zeta} := \frac{d}{dt}\Big|_{t=0}(\zeta \cdot g_t)$.

This defines an injection g → Λ^0(TZ) preserving the corresponding Lie brackets. Consider the extended map

g ⊗ C → Λ^0(TZ), X + iY ↦ X̃ + JỸ. (25)

Since the action preserves J we have $\mathcal L_{\tilde X} J = 0$; the vanishing of the Nijenhuis tensor implies that also $\mathcal L_{J\tilde X} J = 0$. It follows that the image of the map (25) is closed under the Lie bracket on Z and that this Lie bracket is J-linear, i.e. the image is a complex Lie algebra. Thus (25) defines a complex Lie algebra isomorphism onto its image. We then say that Z is an infinitesimal complexification of G.

Given Z and G as above, we can view π : Z → Z/G as a principal G-bundle. The fundamental vector fields X̃ define the "vertical space", i.e. the kernel of π_*. The space of fields JX̃ defines a complementary distribution, which is G-invariant because G preserves J.
In other words, the splitting TZ ≃ g ⊕ ig defines a connection on Z, thus on all associated bundles. A priori there is no adjoint action of G on ig, because there is no actual group G^c inducing it. We can however define an ad hoc action using the adjoint action of G on g, as follows: ad_G : G → GL(ig), ad_g(iX) := i ad_g(X). This allows us to define the associated bundle Z ×_{ad_G} (ig).

Proposition 6.7 There is an isomorphism $Z \times_{\mathrm{ad}_G}(ig) \simeq T(Z/G)$, induced by $[\zeta, X] \mapsto \pi_*(J\tilde X_{|\zeta})$.

Proof: The main issue is to check that the map is well-defined, i.e. that the images of [ζ·g, ad_{g^{-1}}X] and of [ζ, X] coincide. It suffices to prove that the fundamental vector field of ad_{g^{-1}}X at ζ·g equals (X̃_{|ζ})·g, which is a simple computation.

Remark When Z = G^c is a standard complexification, this construction coincides with the previous one in (23), because (L_g)_*X is the fundamental vector field X̃ of the right action of G on G^c.

As above, it follows that Z/G has a canonical connection whose geodesics are the projections of curves ζ_t in Z satisfying, for some X ∈ T_eG,

$\frac{d}{dt}\zeta_t = J\tilde X_{|\zeta_t}$.

As in Section 6.2 this is an ODE: solving it corresponds to integrating the vector field ζ ↦ JX̃_{|ζ} in Z. This problem is G-invariant, but here there is no notion of G^c-invariance. We can complexify geodesics by combining them with solutions to $\frac{d}{ds}\zeta_s = \tilde X_{|\zeta_s}$, thus obtaining holomorphic curves in Z. The analogues of Propositions 6.3 and 6.4 continue to hold in this context.

This framework applies in particular to the space H of Kähler potentials on a compact Kähler manifold (M, ω_0), with G := Ham(M, ω_0) acting on an associated space Q of potentials. Each Q_f is an orbit of this action, so π : Q → H is a principal G-bundle. For any symplectic structure ω we let ham(M, ω) denote the Lie algebra of Ham(M, ω). Its elements are the vector fields $X^\omega_h$ satisfying the equation $dh = \omega(X^\omega_h, \cdot)$, for some function h : M → R. In the Kähler setting it follows that $X^\omega_h = -J\nabla^\omega h$, where ∇^ω is the gradient operator defined by the induced metric g := ω(·, J·). We can choose h uniquely by requiring that it belong to C^∞_ω(M). We can then identify the Lie algebra ham(M, ω) with the Lie algebra C^∞_ω(M), endowed with the natural Poisson bracket on functions (up to sign).

We can express the right-hand side of (28) in terms of the gradient $\nabla^{\omega_{f_t}}\dot f_t$. We thus arrive at the final expression for geodesics on H:

$\ddot f_t = \tfrac12\, |\nabla_t \dot f_t|_t^2$, (30)

where we use ∇_t, |·|_t to indicate that we are using the metric induced by ω_{f_t}.

Remark It is useful to compare (29) and (30): the former is first order, expressing the fact that ḟ_t coincides with a parallel vector field; the latter is second order and uses the fact that C^∞(M) is a vector space, thus has a natural connection. In other words, (30) describes geodesics of the canonical connection on Q/G in terms of the natural connection on C^∞(M).

We now turn to the problem of finding cscK metrics in H. It turns out that there is a functional f : H → R, due to Mabuchi, with the following properties:
• f is convex with respect to the geodesics defined above;
• the critical points of f are precisely the potentials of cscK metrics in H.

Now consider the space J of integrable complex structures on M which are compatible with ω_0. Let G := Ham(M, ω_0) act on J by pullback. It can be shown that J has a canonical Kähler structure which is preserved by the action of G. Furthermore, it is possible to embed Q, together with the Kähler structure defined by f according to Proposition 6.3, into J: this is described e.g. in [7, Chapter 9], so we do not review it here. As in Section 6.2, this embedding provides strong geometric motivation for the Yau–Tian–Donaldson conjecture concerning stability conditions related to the existence of a cscK metric in H.

Complexified diffeomorphism groups

Let (M, J) be a complex manifold.
In Section 7 we argued that Diff(M) inherits a complex structure which is formally integrable. The same construction applies to the space of immersions I of L into M; the space P is then an open subset of I, so it is formally an infinite-dimensional complex manifold. The action of Diff(L) by reparametrization preserves the complex structure. We can thus reformulate the material of Section 2 in terms of the formalism of Section 6. We conclude the following.
• P is an infinitesimal complexification of Diff(L).
• Proposition 6.7 applies to P, proving that the tangent space T(T) can be identified with the adjoint bundle associated to P.
• The connection and geodesics defined in Section 3 coincide with those defined in Section 6.4.
• When M is Kähler and Ric(M) < 0, Proposition 6.3 shows that the functional Vol_J defines a Kähler structure ω_J on P. Proposition 6.4 also applies, showing that the critical points of Vol_J can be interpreted as the zero set of a moment map.

Remark A theorem of Bruhat and Whitney [18] shows that any real analytic L can be "complexified", i.e. embedded as a totally real submanifold into an appropriate complex manifold (M, J). It thus defines a space P. It follows that the corresponding group Diff(L) admits an infinitesimal complexification, even though it may not admit a genuine complexification [10].

A special case of the above occurs when M is negative Kähler–Einstein: in this case, using Proposition 5.11, we obtain a reformulation of minimal Lagrangians in terms of the zero set of a moment map. The analogies with the theory of cscK metrics and Hermitian–Einstein connections lead to the following question, which seems worthy of further pursuit.

Question Can the existence of minimal Lagrangians in negative KE manifolds be related to a stability condition concerning (M, g, J, ω) and the chosen homotopy class T?

In [12] we study a different existence question: given a minimal Lagrangian for a negative KE metric on M, we prove the existence of minimal Lagrangians with respect to small KE perturbations of that metric. Concerning uniqueness, one can again formulate several different questions, depending on the set of submanifolds one chooses to work with. As seen in Corollary 5.12, a minimal Lagrangian L_0 is unique within the set of totally real submanifolds which can be connected to L_0 via a geodesic ray: in some sense this is a global statement, but of course it is of interest only in the presence of a good existence theory for geodesics. In [12] we discuss the question of local uniqueness, i.e. uniqueness within a neighbourhood of a minimal Lagrangian L_0 in T. Existence and uniqueness conjectures in the context of Fukaya categories are formulated in [8].
Characterization and Analysis of Metal Adhesion to Parylene Polymer Substrate Using Scotch Tape Test for Peripheral Neural Probe

This paper presents measurement and FEM (Finite Element Method) analysis of the metal adhesion force to a parylene substrate for an implantable neural probe. A test device composed of 300 nm-thick gold and 30 nm-thick titanium metal electrodes on top of a parylene substrate was prepared. The metal electrodes suffer from delamination during the wet metal patterning process; thus, CF4 plasma treatment was applied to the parylene substrate before metal deposition. The two thin-film metal layers were deposited by an e-beam evaporation process. The metal electrodes were 200 µm wide and 5 mm long, with 300 µm spacing between the metal lines, as in the neural probe design. The adhesion force of the metal lines to the parylene substrate was measured with the scotch tape test. The angle between the scotch tape and the test device substrate changed from 60° to 90° during characterization. The force exerted on the scotch tape was recorded as a function of the displacement of the scotch tape. It was found that a peak appears in the measured force-displacement curve due to metal delamination. The metal adhesion was estimated at 1.3 J/m² from the force peak and metal width in the force-displacement curve. In addition, the scotch tape test was simulated through FEM modeling to elucidate the delamination behavior of the test structure.

Introduction

Many research efforts have been made to develop and improve prosthetic hands and arms for amputees, and, in recent years, much progress has been observed in the development of life-like robotic hands and the means of controlling them with a greater degree of freedom. For this purpose, micro-electro-mechanical systems (MEMS) technologies have been used to fabricate neural interface probes [1-6]. However, existing MEMS-based neural electrodes have limitations as neural interfaces due to their material characteristics. Thus, flexible neural electrodes have recently been proposed to minimize the mechanical mismatch between the electrode and tissue after implantation, for stable long-term recording and stimulation. In this vein, peripheral neural interface (PNI) devices have appeared in this field, retrieving and sending neural signals directly from and to residual or existing peripheral nerves [7]. Recently, thin-film flexible polymeric devices have been used for measuring nerve impulses from the central or peripheral nervous systems [8-13]. Such flexible polymeric devices tend to be designed several µm in thickness and a few mm in length, owing to their role of interfacing with neurons in the human body. Consequently, metal electrodes on such a thin and long polymeric substrate are constrained to be at best several hundred nanometers thick. Thus, metallization and its adhesion to the polymer substrate are critical.

Test Pattern Fabrication

Figure 1 shows the schematic of the test pattern; it has a length of 50 mm and a width of 5 mm. It consists of 6 metal lines; each metal line is 200 µm wide and spaced at 300 µm, as in the design of the multi-channel neural probe. The test pattern fabrication process is shown in Figure 2: (a) A parylene (parylene-C) layer 5 µm in thickness was deposited on a 4-inch Si wafer using a commercial parylene coater system (VPC-500, Paco Engineering, Incheon, Korea). The monomer was deposited on the surface of the silicon wafer in a vapor-phase condition at 0.8 µm/min, and the deposition temperature was 20 °C.
(b) Before metal layer deposition, the parylene surface was etched with CF4 gas (process conditions: 25 mTorr, 20 sccm, 1.5 min), without O2, in order to increase the surface roughness and reduce the hydrophobicity, as shown in Figure 3. Note that O2 was not used for surface modification, since parylene is susceptible to surface oxidation, which degrades the mechanical properties of the polymer neural probe. Moreover, the nanopillar structures on the parylene substrate were built more efficiently with CF4 gas than with O2 gas, as shown in Figure 3b,c. There have been two major methods to improve the interfacial adhesion between parylene and a metal layer: chemical treatment, or mechanical surface modification with RIE (Reactive Ion Etching) (Plasma-Therm 790 MF, Plasma-Therm, Saint Petersburg, FL, USA). Concerning the test sample used in our study, parylene was already deposited on a silicon substrate; thus, chemical treatment might have changed the properties of the parylene substrate itself, as well as the interface on which the metal was deposited. Furthermore, RIE etching has shown better performance compared with the conventional A-174 silane chemical treatment [15]. Therefore, we modified the parylene surface with a CF4 plasma etch to create a nanopillared surface, increasing the interfacial energy. The effectiveness of the nanopillared parylene surface was confirmed during the metal patterning step. Titanium (30 nm) and gold (300 nm) were sequentially deposited using an e-beam evaporator (ei5, ULVAC, Methuen, MA, USA) on the parylene substrate without breaking vacuum. (c) Photoresist (AZ GXR 601 (46cp), Merck, Kenilworth, NJ, USA) was patterned as a metal etch mask. The process conditions are summarized in Table 1.

The resultant test samples on a 4-inch silicon wafer are shown in Figure 4. All of the metal lines were successfully implemented on the parylene substrate without any delamination during the etching process. Note that, without the CF4 treatment, the metal lines were fully delaminated from the parylene substrate during the metal wet-etch step.

Scotch Tape Test for Metal Adhesion to Parylene Substrate

The prepared test samples underwent the scotch tape test to evaluate the adhesion strength of the metal electrodes to the parylene substrate. The machine used for the scotch tape test was a Shimadzu EZ-S machine (Shimadzu, Kyoto, Japan) dedicated to tensile testing. Figure 5 shows a photo of the scotch tape attached to the test sample and a schematic of the scotch tape test. The scotch tape is 3M transparent tape 550. It has a thickness of 50 µm and a width of 12 mm, and it provides an adhesion to steel of 1.8 N/cm (or 0.18 N/mm).
(a) test setup (b) scotch tape test schematic Figure 5. Scotch tape test for metal adhesion to parylene substrates. Referring to Figure 5b, the scotch tape test was carried out in the following way: the machine applies a stroke (unit: mm/min) to one end of the scotch tape, and then it measures the force F_z (unit: N). As the force of interest is F_θ, the relationship between F_z and F_θ can be calculated as Equation (1). During the scotch tape test, the angle (90° − θ) was changed from 60° to 90°; thus, F_θ = 0.5 F_z at 45°, and F_θ = F_z at 90°. For simplicity, we used the measured F_z from the scotch tape test to extract the adhesion strength. Metal adhesion to the parylene substrate was then measured with the scotch tape test. Scotch tape was attached to the parylene surface, from slightly away from the left-end metal line to the right end of the metal. After that, two different strokes (10 mm/min and 1 mm/min) were applied to the scotch tape, and the corresponding force was measured, as shown in Figure 6. All metal lines were debonded from the parylene substrate in all cases. It was found that the sample with CF4 treatment needed more force than that without CF4 treatment, which means a parylene surface with CF4 treatment sticks better to scotch tape. This is evidence that the parylene surface energy can be increased with CF4 treatment alone, without O2.
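Before turning to the force-peak analysis, a short numeric sketch can illustrate the angle correction between the machine force F_z and the tape-line force F_θ. The functional form used below, F_θ = F_z·sin²α with α = 90° − θ, is an assumption adopted only because it reproduces the two values stated above (F_θ = 0.5 F_z at 45° and F_θ = F_z at 90°); it is not necessarily the paper's Equation (1).

```python
import numpy as np

def f_theta(f_z, alpha_deg):
    """Tape-line force from the measured machine force f_z; the sin^2
    dependence on the tape-substrate angle alpha is an assumption, chosen
    to match the two values quoted in the text, not the paper's Eq. (1)."""
    return f_z * np.sin(np.radians(alpha_deg)) ** 2

# Angle sweep covering the 60-90 deg range used in the test, plus the
# 45 deg reference point quoted in the text.
for alpha in (45, 60, 75, 90):
    print(f"alpha = {alpha:2d} deg  ->  F_theta = {f_theta(1.0, alpha):.2f} x F_z")
```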
The first peak in each measured force trace was due to the initiation of metal debonding, which causes an abrupt drop in force. The minimum adhesion force was found when a stroke of 1 mm/min was applied to the parylene test sample without CF4 treatment (green line). It can be said that the metal adhesion was lower than the adhesion value estimated from the first peak. That value can be calculated as follows: (0.5 N/12 mm) × (1.2 mm/12 mm) = 4.2 N/m. As all metal lines were debonded, the metal adhesion must have been lower than 4.2 J/m². Thus, a lower stroke of 0.1 mm/min was applied to find the metal adhesion to the parylene substrate. In this case, the scotch tape was attached only to the narrow metal lines of the CF4-treated parylene, and the force was recorded while the 0.1 mm/min stroke was applied. The measured force-displacement curve was compared with the previous results at 1 mm/min and 10 mm/min, as shown in Figure 7. As marked in Figure 7, the scotch tape debonded up to 0.9 N without metal line delamination, and one metal line started to debond at 0.91 N. Therefore, the metal adhesion could be extracted from this peak force: (0.91 N/12 mm) × (0.2 mm/12 mm) = (76 N/m) × (0.017) = 1.29 N/m = 1.29 J/m². Note that the inset shows the transferred metal lines on the scotch tape. Scotch tape strokes of 1 mm/min and 10 mm/min introduced large force fluctuations, which would result from the relatively large applied force compared with the interfacial energy.
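The two adhesion estimates above follow the same recipe: the energy release per unit tape width (peak force divided by the 12 mm tape width) is scaled by the fraction of the tape width occupied by metal. A minimal sketch of that arithmetic, with the numbers taken from the measurements above, is given below; the interpretation of 1.2 mm as the combined width of the six 0.2 mm lines is an inference from the test-pattern geometry.

```python
TAPE_WIDTH_MM = 12.0  # 3M transparent tape 550

def adhesion_j_per_m2(peak_force_n, metal_width_mm):
    """Adhesion energy estimate: (F/b) scaled by the metal coverage
    fraction of the tape width. N/m is numerically equal to J/m^2."""
    force_per_width = peak_force_n / TAPE_WIDTH_MM   # N/mm
    coverage = metal_width_mm / TAPE_WIDTH_MM        # dimensionless
    return force_per_width * coverage * 1e3          # -> J/m^2

# Upper bound at 1 mm/min, untreated sample: 0.5 N peak releasing all six
# 0.2 mm lines (1.2 mm of metal in total).
print(adhesion_j_per_m2(0.5, 1.2))    # ~4.17, quoted as 4.2 J/m^2
# Single-line debonding at 0.1 mm/min: 0.91 N peak, one 0.2 mm line.
print(adhesion_j_per_m2(0.91, 0.2))   # ~1.26, quoted as 1.29 J/m^2
```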
FEM Modeling and Simulation FEM modeling and simulation are very useful for understanding stress effects and the corresponding deformation of MEMS packages, the debonding characteristics of transfer packaging, and mechanical behaviors related to delamination [23][24][25][26][27]. In particular, debonding of a substrate and film delamination can be studied by adopting a CZM (Cohesive Zone Model) to represent the interface of interest [28][29][30]. For FEM modeling, the material properties of each element are important for obtaining good simulation results. The required material properties in this modeling are the Young's moduli and Poisson's ratios of the scotch tape and parylene, and the strain energy release rate of the interface between the scotch tape and parylene. The Young's modulus of the scotch tape was extracted from a tensile test result, as shown in Figure 8. The Young's modulus of the scotch tape is 6.9 MPa in the elastic region, and the maximum applied force in the elastic region is 7.6 N at an applied strain of 2.2% (2.2 mm elongation, as the tested scotch tape length is 100 mm). From the tensile test result, the scotch tape in the metal adhesion test would remain in the elastic region, as the applied force was less than 2 N in all cases. The Poisson's ratio of the scotch tape is assumed to be 0.4, as for other polymer materials. The Young's modulus and Poisson's ratio of parylene are 2.67 GPa, as extracted in previous work, and 0.4, respectively [23]. Table 3 summarizes the material properties for the FEM model. Note that the interface material properties were defined by the critical strain energy release rate, whose value was the measured adhesion energy, as explained earlier. Given these material properties, modeling and simulation of the scotch tape test were performed in two steps: (1) crack propagation behavior of the interface between the scotch tape and the parylene substrate; (2) debonding of the scotch tape from the parylene substrate based on the CZM. A 2D FEM model for crack propagation was built, as shown in Figure 9a. The length of this model was 1000 µm, and the thickness was 50 µm for the scotch tape and 5 µm for the parylene polymer. The following boundary conditions were applied: the bottom line is fixed, and a displacement load is applied to the left-top end over a 50 µm width. Note that the 2D model behavior was defined as plane strain.
The total deformation of the model when a displacement of 100 µm in the y direction was applied to the scotch tape is presented in Figure 9b. As in the scotch tape test, delamination of the interface between the scotch tape and parylene occurred, and the crack propagated in the x direction. The force-displacement behavior was investigated as a function of the interface adhesion energy, as shown in Figure 9c. The force required for crack initiation increased as the interface adhesion energy increased, as expected. The force magnitude smaller than the measurement would have been due to the thickness effect in the 2D simulation. An important parameter in this graph is the minimal displacement for crack initiation: 5 µm, 6.7 µm, and 7.6 µm for 1.3 J/m², 3.0 J/m², and 5.0 J/m², respectively. These parameters are included in the following 3D interface delamination as a part of the CZM parameters. From the simulation results, the strain energy release rate (SERR) for mode 1 from the VCCT (Virtual Crack Closure Technique) (G1), the SERR for mode 2 from the VCCT (G2), and the SERR for mode 3 from the VCCT (G3) were found to be 0.5 J/m², 0.75 J/m², and 0 J/m², respectively. The sum of the VCCT components corresponded to the interface energy of 1.3 J/m².
The principal modes of fracture in the delamination were mode 1 and mode 2. Figure 10a shows the 3D model for the CZM interface delamination. As indicated in Figure 10b, there were two different regions in this model: pre-cracked (interface I) and CZM-modeled (interface II). The CZM is a useful way to simulate interface delamination and is frequently used for thin-film delamination and transfer packaging techniques [24,25]. Interface II, which is of interest for the adhesion, is modeled with CZM (Cohesive Zone Model) parameters, as explained in Figure 10c. As the bilinear CZM model needs at least two parameters, the maximum normal traction and the normal displacement at debonding were defined, as presented in Table 4. The minimal gaps for fracture initiation found from the previous crack propagation simulation were included in the 3D CZM simulation to estimate the applied force needed to initiate the interface crack. Table 4. Bilinear CZM parameters: maximum normal traction, 0.5 MPa; normal displacement jump at completion of debonding, 5 µm; maximum tangential traction, 0.5 MPa; tangential displacement jump at completion of debonding, 5 µm. A displacement load was applied to one end of the scotch tape, and the force-displacement response was then extracted from the simulation. Referring to the bilinear CZM model, the critical strain energy release rate was calculated to be 1.25 J/m². The initial width of the 3D model was 200 µm, the same as the fabricated metal electrode width. As in the 2D case, the displacement load was applied to the left-tip end. The extracted force-displacement curve at the loading point is presented in Figure 11. The minimal force for debonding of the scotch tape was estimated at 1.2 N/m, while the measured value was 1.29 N/m. The adhesion force from the simulation was in good agreement with the measurement. The width of the metal electrode could be increased to obtain larger interface adhesion, as shown in Figure 11. When a wider metal electrode is used to achieve larger metal-parylene adhesion, the metal line impedance for neural signal acquisition should be taken into account.
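A quick way to see how the Table 4 parameters encode the measured adhesion is to integrate the bilinear traction-separation law: its area is the critical strain energy release rate, ½·σ_max·δ_f = 1.25 J/m², the value quoted above. The sketch below reproduces that; the damage-initiation separation δ_0 is an assumed value, since it is not given in the text.

```python
import numpy as np

SIGMA_MAX = 0.5e6   # maximum normal traction, Pa (Table 4)
DELTA_F = 5e-6      # separation at completion of debonding, m (Table 4)

def bilinear_traction(delta, delta_0=0.5e-6):
    """Bilinear cohesive law: traction rises linearly to SIGMA_MAX at
    delta_0 (assumed value) and decays linearly to zero at DELTA_F."""
    rising = SIGMA_MAX * delta / delta_0
    falling = SIGMA_MAX * (DELTA_F - delta) / (DELTA_F - delta_0)
    return np.where(delta <= delta_0, rising, np.clip(falling, 0.0, None))

# The critical SERR is the area under the triangle; it does not depend
# on the assumed delta_0.
delta = np.linspace(0.0, DELTA_F, 10001)
g_c = np.trapz(bilinear_traction(delta), delta)
print(f"G_c = {g_c:.3f} J/m^2")   # 1.250, matching the quoted 1.25 J/m^2
```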
Figure 11. Force-displacement curves as a function of metal width. Conclusions Thin-film flexible polymeric devices, such as a parylene-metal-parylene system, are being used to measure nerve impulses from the central or peripheral nervous systems. Such thin-film polymeric devices provide the advantages of flexibility and biocompatibility, but they are prone to delamination and carry concerns about their mechanical robustness. Therefore, the adhesion strength of metal to the polymer substrate is important. The adhesion of metal electrodes to a parylene substrate was measured by the scotch tape test. Thin, long metal electrodes were patterned on a parylene substrate whose surface was modified by a CF4 plasma etch before the metal deposition by e-beam evaporation. The metal adhesion strength was estimated by measuring the force-displacement curve of the scotch tape test. The estimated metal adhesion was 1.3 J/m². The experimental result was verified through FEM modeling of the scotch tape test. The proposed modeling method provided an adhesion force in good agreement with the experimental result. Although a thin-film parylene-based device can provide excellent short-term reliability, one significant drawback is poor adhesion to the metallic layer. The failure of the metal electrode on the parylene substrate is accelerated in the wet environment of the human body and under mechanical forces originating from body movement. Therefore, mechanical integrity tests under conditions of human body implantation and movement will be performed to assess the long-term reliability of the parylene-metal devices, along with the biocompatibility of the parylene-based neural probe.
2020-06-25T09:09:26.153Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "1f65e7f94259fd68e33ef6cd0c8d54adbd47da89", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/11/6/605/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "faed80d7a7bb9e7c9c5eaff11247d74818818157", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
259178734
pes2o/s2orc
v3-fos-license
Inequalities for the Coefficients of Schwarz Functions The relation between a considered family of analytic functions and the class P of functions with a positive real part is one of the main tools used in solving various extremal problems, among others coefficient problems. Another approach can be useful in solving such tasks. This approach is to exploit the correspondence between a considered family and the family B_0 of bounded analytic functions ω such that ω(0) = 0. Such functions appear in the well-known Schwarz lemma, so they are called Schwarz functions. In the literature, there are numerous coefficient functionals discussed for functions in P. On the other hand, the analogous functionals for functions in B_0 are not so commonly studied. Consequently, we do not know so much about coefficient inequalities for Schwarz functions. We shall fill this gap to some extent by considering two types of functionals. The first one is a Zalcman-type functional c_n − c_k c_{n−k}; the other one is the Hankel determinant c_{n−1} c_{n+1} − c_n². For these functionals, bounds with respect to a fixed first coefficient c_1 (or a few initial coefficients) are obtained. Some generalizations of these functionals are also given.
All results are sharp. Denote by B_0 the class of functions ω analytic in the unit disk D, satisfying |ω(z)| < 1 and ω(0) = 0, written as a power series ω(z) = Σ_{n=1}^∞ c_n z^n, z ∈ D (1). Denote by P the class of functions analytic in D, given by p(z) = 1 + Σ_{n=1}^∞ p_n z^n (2) and having a positive real part. It is clear that if p = (1 + ω)/(1 − ω), then p ∈ P if and only if ω ∈ B_0. This property makes it possible to discuss problems in B_0 by considering the class P, and vice versa. Further, we apply this property to establish a relation between the initial coefficients of ω ∈ B_0 and p ∈ P. From the Schwarz-Pick lemma, it follows that for ω ∈ B_0 of the form (1), |c_2| ≤ 1 − |c_1|². This inequality can be easily improved to a sharp estimate of |c_2 − μ c_1²| valid for any μ ∈ C. Carlson in [2] obtained another generalization of the Schwarz-Pick lemma. In fact, he established some inequalities for the set B of functions bounded by 1 (the assumption ω(0) = 0 is not necessarily satisfied). Here, we adapt these inequalities for the class B_0 (for details, see [8]). Although the majority of our results will be derived with the use of the theorems given above, in the proof of Theorem 20 we apply a different approach. We express the initial coefficients of a Schwarz function ω ∈ B_0 by the corresponding coefficients of a function with a positive real part p ∈ P. Let p(z) and ω(z) be of the form (2) and (1), respectively. Comparing coefficients at powers of z in p(z)(1 − ω(z)) = 1 + ω(z), we obtain expressions for c_1, c_2, ... in terms of p_1, p_2, .... Consequently, we need the following lemma (see [4]). Throughout the paper, we assume that the first coefficient of ω ∈ B_0 is a nonnegative real number; consequently, we assume that c_1 = c ∈ [0, 1]. This assumption does not restrict the generality of our considerations, because for any ϕ ∈ R the rotation z ↦ e^{−iϕ} ω(e^{iϕ} z) also belongs to B_0, and a suitable choice of ϕ makes c_1 real and greater than or equal to 0. Zalcman Functionals For a given ω ∈ B_0 of the form (1), consider the functional c_n − c_k c_{n−k}. A related functional a_n − a_k a_{n−k+1}, defined for an analytic function f, is called a general Zalcman functional. Its classical version a_{2n−1} − a_n² appeared in the late 1960s and was connected with the famous Zalcman conjecture for analytic univalent functions in S. Zalcman conjectured (see [1]) that |a_{2n−1} − a_n²| ≤ (n − 1)² for f ∈ S and n ≥ 2. This conjecture was verified for S and n = 2, 3, 4, 5, 6, as well as for many other subclasses of S. For a function p ∈ P of the form (2), an analogous functional is defined by p_n − p_k p_{n−k}. It was Livingston who proved in [5] that |p_n − p_k p_{n−k}| ≤ 2 for p ∈ P and 0 ≤ k ≤ n. Theorem 4 If ω ∈ B_0 is given by (1), then the following sharp inequality holds for all n ∈ N. Equality holds for each ω(z) = z^j, j ∈ N, 2 ≤ j ≤ n. Proof of Theorem 4 The claim follows by applying Theorem 2 with a suitable choice of parameters. Theorem 6 If ω ∈ B_0 is given by (1), then the following sharp inequalities hold for all n ∈ N and μ ∈ R. Equalities hold for each ω(z) = z^j, j ∈ N, 2 ≤ j ≤ n. Observe that (15) is a generalization of (4). The application of the same method as in the proof of Theorem 4, but with the choice of λ_n = 1, λ_{n−k} = −c_k, and λ_i = 0 for all other i, where the integer k is chosen to satisfy 2 ≤ k < n, leads to the following result. Theorem 7 If ω ∈ B_0 is given by (1), then the following sharp inequality holds for all n, k ∈ N such that 2 ≤ k < n. Equality holds for the corresponding extremal functions. Consequently, we obtain Corollary 8: if ω ∈ B_0 is given by (1), then the resulting inequality is true for all n, k ∈ N with 2 ≤ k < n. Taking k = n − 1 results in the following improvement of (13). Corollary 9 If ω ∈ B_0 is given by (1), then the resulting inequality is true for all n ∈ N, n ≥ 3.
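To make the objects in this section concrete, the following numeric sketch evaluates the Zalcman-type functional c_n − c_k c_{n−k} for a simple one-parameter family of Schwarz functions, ω(z) = z(z + a)/(1 + az) with 0 ≤ a ≤ 1. This family is chosen here purely for illustration; it is not one of the paper's extremal functions (those are powers z^j), and the printed values are sample evaluations, not the sharp bounds of the theorems above.

```python
import numpy as np

def schwarz_coeffs(a, n_max):
    """Taylor coefficients c_0..c_{n_max} of omega(z) = z(z + a)/(1 + a z),
    which lies in B_0 for 0 <= a <= 1 (omega(0) = 0, |omega| < 1 on D)."""
    geom = np.array([(-a) ** m for m in range(n_max + 1)])  # series of 1/(1 + a z)
    num = np.zeros(n_max + 1)
    num[1], num[2] = a, 1.0                                 # z(z + a) = a z + z^2
    return np.convolve(num, geom)[: n_max + 1]

for a in (0.0, 0.3, 1 / np.sqrt(3), 1.0):
    c = schwarz_coeffs(a, 6)
    z31 = c[3] - c[1] * c[2]   # Zalcman-type functional with n = 3, k = 1
    z42 = c[4] - c[2] * c[2]   # n = 4, k = 2
    print(f"a = {a:.3f}:  |c3 - c1 c2| = {abs(z31):.4f},  |c4 - c2^2| = {abs(z42):.4f}")
```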
Remark 10 A slight modification of the proof of Theorem 7 yields two further inequalities. Interestingly, the second of them can be improved by applying Carlson's theorem: rewriting the functional in a suitable form shows that the improved bound holds for all ω ∈ B_0 given by (1) and all integers m ≥ 2. From Corollary 9, we know a bound for |c_3 − c_1 c_2| for ω ∈ B_0 given by (1). This inequality can be slightly improved if we apply Carlson's theorem once again. Theorem 11 If ω ∈ B_0 is given by (1) and c = c_1 ∈ [0, 1], then inequality (24) holds. Inequality (24) is sharp for c = 0 and c ∈ [2/3, 1]. In the first case, the extremal function is ω(z) = z³. In the other, the extremal function is given by (25). Comparing the two bounds for |c_3 − c_1 c_2|, i.e., the bound in (24) and √(1 − c²), which follows from (19), we can see that the first bound is better, and equality on [0, 1] holds only for c = 0, c = √2/2, and c = 1. Proof of Theorem 11 From the triangle inequality and Theorem 1, we obtain an estimate h(|c_1|, |c_2|), where the set of variability of (|c_1|, |c_2|) is known. For a fixed x ∈ [0, 1], the function y ↦ h(x, y) is increasing for y < (1/2)x(1 + x) and decreasing for y > (1/2)x(1 + x). Hence, the maximum of h can be determined, which results in (24). In the same way, we can prove what follows. Hankel Determinants For a given analytic function f of the form (12), we define the second Hankel determinant as H_2(n) = a_n a_{n+2} − a_{n+1}². In recent years, the second Hankel determinant has been widely discussed for various subclasses of S as well as for some subclasses of non-univalent functions. The research has mainly focused on H_2(2) (for numerous results, see [7]). It is worth noting that the sharp bound of |a_2 a_4 − a_3²| for the whole class S is still not known. In this section, we derive the sharp bounds of H_2(n) for ω ∈ B_0 and n ∈ N. Theorem 14 gives the first of these bounds, with equality for the function defined by (25). Proof From the triangle inequality and from (3), the asserted bound follows. Remark 15 The same bound can be obtained by replacing |c_1 c_3 − c_2²| with |c_1 c_3 + c_2²|. In this case, the bound is not sharp for all c ∈ [0, 1], but only for c = 0 and c = 1. The extremal functions are ω(z) = z² and ω(z) = z, respectively. A slight modification of the above proof leads to a more general version of Theorem 14. In Theorem 16, and consequently in Corollary 17, equality holds for a function given by (25). Now, let us turn to the estimation of c_{n−1} c_{n+1} − c_n². Applying (7), we are able to obtain the following general result. Theorem 18 If ω ∈ B_0 is given by (1) and c = c_1 ∈ [0, 1], then inequality (31) holds for all n ∈ N, n ≥ 3. Proof Inequality (7) applied with N = n + 1 implies an estimate of the coefficients involved. Consequently, by omitting the last two components, which are non-positive, we arrive at (31), with equality for ω(z) = z and ω(z) = z^n. Furthermore, we can generalize the inequality in (31) in two directions. For the first one, let k, m, n be integers greater than 1 with k < n and m < n, and let N = min{k, m}. Then the corresponding bound holds, and equality holds for ω(z) = z^j, j = 1, 2, ..., N, or j = n. To obtain the other generalization, we discuss two cases: first we assume that μ ∈ [0, 1], and then we let μ ≥ 1. Finally, observe that Theorem 18 is still valid if we replace c_{n−1} c_{n+1} − c_n² by c_{n−1} c_{n+1} + c_n². Combining the information given above, we obtain a sharp estimate, and for the rotation of a function given by (36) the functional attains the corresponding value. It is clear that for n ≥ 3 these examples attain the value (1 − c²)², which suggests that the exact bound of |c_{n−1} c_{n+1} − c_n²| for all n ≥ 3 is equal to (1 − c²)². Now, we shall estimate a related sum. At the beginning, we prove the following lemma.
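The suggested sharp bound (1 − c²)² for |c_{n−1} c_{n+1} − c_n²| can be probed numerically. In the sketch below, Taylor coefficients are recovered through the Cauchy integral evaluated with the FFT, and the second Hankel determinant is compared with (1 − c_1²)². The trial family ω(z) = z(a + z^{n−1})/(1 + a z^{n−1}) is an illustrative choice, not necessarily the function (36) mentioned above, although it attains the suggested bound exactly.

```python
import numpy as np

def taylor_coeffs(f, n_max, n_samples=4096, r=0.5):
    """Coefficients c_0..c_{n_max} via the Cauchy integral on |z| = r,
    evaluated with the FFT."""
    z = r * np.exp(2j * np.pi * np.arange(n_samples) / n_samples)
    c = np.fft.fft(f(z)) / n_samples
    return np.array([c[k] / r**k for k in range(n_max + 1)])

def hankel2(c, n):
    """Second Hankel determinant c_{n-1} c_{n+1} - c_n^2."""
    return c[n - 1] * c[n + 1] - c[n] ** 2

n = 4
for a in (0.0, 0.3, 0.7, 1.0):
    # Trial Schwarz function with c_1 = a; an illustrative choice only.
    omega = lambda z, a=a: z * (a + z ** (n - 1)) / (1 + a * z ** (n - 1))
    coef = taylor_coeffs(omega, n + 1)
    print(f"c1 = {a:.1f}:  |H2({n})| = {abs(hankel2(coef, n)):.4f},"
          f"  (1 - c1^2)^2 = {(1 - a**2) ** 2:.4f}")
```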
Conclusions From the results proved in the two previous sections, we can observe that for Schwarz functions given by (1) there are three similar inequalities valid for all integers n ≥ 2. The first one is the inequality Σ_{j=1}^n |c_j|² ≤ 1. Data Availability Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study. Declarations Conflict of interest The author has no relevant financial or non-financial interests to disclose. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2023-06-17T13:41:48.495Z
2023-06-17T00:00:00.000
{ "year": 2023, "sha1": "06d96f930a180b3b1b91aedae3060a50a4b9213d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40840-023-01538-7.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "06d96f930a180b3b1b91aedae3060a50a4b9213d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
39768159
pes2o/s2orc
v3-fos-license
Insulin-like signaling and the neural circuit for integrative behavior in C. elegans. Caenorhabditis elegans exhibits a food-associated behavior that is modulated by the past cultivation temperature. Mutations in INS-1, the homolog of human insulin, caused defects in this integrative behavior. Mutations in DAF-2/insulin receptor and AGE-1/phosphatidylinositol 3 (PI-3)-kinase partially suppressed the defect of ins-1 mutants, and a mutation in DAF-16, a forkhead-type transcription factor, caused a weak defect. In addition, mutations in the secretory protein HEN-1 showed synergistic effects with those in INS-1. Expression of AGE-1 in any of the three interneurons AIY, AIZ, or RIA rescued the defect characteristic of age-1 mutants. Calcium imaging revealed that starvation induced INS-1-mediated down-regulation of AIZ activity. Our results suggest that INS-1, in cooperation with HEN-1, antagonizes the DAF-2 insulin-like signaling pathway to modulate the interneuron activity required for food-associated integrative behavior. The secreted peptide hormone insulin modulates neural plasticity. Insulin and insulin receptors are expressed in several regions of the rat brain (Havrankova et al. 1978a,b), insulin receptors localize to post-synapses (Abbott et al. 1999), and insulin can produce long-term depression (LTD) of synaptic transmission through endocytosis of α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors in rat hippocampal CA1 neurons (Man et al. 2000). In addition, phosphatidylinositol 3 (PI-3)-kinase, which functions in the insulin signaling pathway, is thought to induce long-term potentiation (LTP) of synaptic transmission in the dentate gyrus of the rat (Kelly and Lynch 2000). These alterations by proteins of the insulin signaling pathway may be involved in learning and memory, but what kind of behavior the insulin signaling pathway modulates has remained largely unknown. The nematode Caenorhabditis elegans is well suited for the analysis of the molecular and cellular mechanisms underlying neural plasticity because of its accessible genetics, stereotyped behavioral responses, and its simple nervous system consisting of 302 neurons whose connections are entirely known (White et al. 1986). Recently, physiological analysis of the neural circuit in live worms has become possible through the use of cameleon, a genetically encodable calcium indicator, to measure Ca2+ concentration changes (Miyawaki et al. 1997; Kimura et al. 2004). C. elegans exhibits thermotaxis, an integrative behavior in which well-fed animals in a thermal gradient are attracted to their cultivation temperature, whereas starved animals avoid it (Hedgecock and Russell 1975; Mohri et al. 2005; Rankin 2005). This food-associated behavioral plasticity, regarded as the most complex behavior in C. elegans, is an ideal behavioral paradigm for the comprehensive study of neural plasticity at the molecular, physiological, and behavioral levels. In this study, we show that, in cooperation with the secreted protein HEN-1, the insulin homolog INS-1 and the insulin-like signaling pathway modulate the activity of interneurons to execute thermotaxis behavior in C. elegans. We suggest that a neuroendocrine system is important for modulating the neural circuit that underlies neural plasticity.
Results and Discussion Insulin homolog INS-1 is required for food-associated temperature-responsive behavior Wild-type animals cultivated at 17°C in well-fed conditions migrated to the center of the agar plate, which corresponded to their cultivation temperature (Fig. 1A,C,K). By contrast, most of the wild-type animals cultivated at 17°C for 3 h in food-deprived conditions avoided their cultivation temperature (Fig. 1D,K). Similarly, wild-type animals cultivated at 25°C with food migrated to their cultivation temperature (Fig. 1G,L), whereas most avoided their cultivation temperature after being cultivated at 25°C without food for 1 h (Fig. 1H,L). These results indicate that C. elegans can associate cultivation temperature with feeding state and memorize this information (Hedgecock and Russell 1975; Mohri et al. 2005). Previous studies have identified the neural circuit and several genes required for thermotaxis (Mori and Ohshima 1995; Komatsu et al. 1996; Ishihara et al. 2002; Kuhara et al. 2002; Murakami et al. 2005; Okochi et al. 2005; Inada et al. 2006). However, how C. elegans executes thermotaxis remains to be understood at the molecular and physiological levels. The aho-2(nj32) mutant was isolated in our genetic screen for mutants defective in this integrative behavior (Mohri et al. 2005). aho-2(nj32) mutants always migrated to their cultivation temperature after starvation as well as after feeding (Fig. 1E,F,I-L); we designated this phenotype as an Abnormal hunger orientation (Aho) phenotype. The Aho phenotype of aho-2(nj32) mutants could result from a defect either in the association between cultivation temperature and starvation or in the recognition of starvation per se. To address these possibilities, we tested the responses of aho-2(nj32) animals to changes in feeding state using a locomotory activity assay (Sawin et al. 2000). Well-fed wild-type animals move more slowly in plates with food than without food, and starved wild-type animals move even more slowly in plates with food than do well-fed animals (Fig. 1M). aho-2(nj32) and wild-type animals exhibited nearly the same responses to changes in feeding state (Fig. 1M), suggesting that aho-2(nj32) animals can respond to starvation and exhibit a defect in the association between cultivation temperature and starvation (Mohri et al. 2005). We investigated whether aho-2(nj32) mutants had a defect in the integration of different chemosensory inputs using an interaction assay (Fig. 1B), which is a behavioral test for the integration of two opposing signals: a signal from an attractive odorant, diacetyl, and one from a repulsive metal, the Cu2+ ion (Ishihara et al. 2002). HEN-1 is a secretory protein with an LDL receptor motif and is required for the behavioral task tested in this interaction assay (Ishihara et al. 2002). In contrast to hen-1(tm501)-null mutants, aho-2(nj32) mutants responded normally (Fig. 1N). aho-2(nj32) mutants exhibited normal responses to diacetyl and Cu2+ ions separately (data not shown), implying that the integrative processing of the two chemical compounds is normal in aho-2(nj32) mutants. The gene aho-2 was mapped to a 0.08 map unit region in the center of chromosome IV (Mohri et al. 2005; data not shown), which is covered by three cosmids. We found that only F13B12 rescued the defect of aho-2(nj32) (Fig. 2A). Among six predicted genes in the region covered by F13B12 (data not shown), a PCR product containing ins-1, the C. elegans gene most closely related to human insulin among 38 insulin-related genes (Pierce et al.
2001; Li et al. 2003), rescued the defect of the aho-2(nj32) mutant (Fig. 2A). The ins-1(nr2091) mutant, a previously isolated putative null mutant (Pierce et al. 2001), also showed an Aho phenotype (Fig. 2C). We found a 130-base-pair (bp) deletion from the first exon to the second exon in ins-1 of aho-2(nj32) mutants (data not shown). These results led us to conclude that aho-2 is identical to ins-1. Neuronal expression of INS-1 is important for integrative thermotaxis behavior We observed ins-1-expressing cells using an ins-1 promoter::GFP fusion gene. As previously reported, fluorescence was observed in many head neurons, including ADF, AIA, AIM, ASE, ASG, ASH, ASI, ASJ, AWA, BAG, and NSM, and was also observed in the intestine, hypodermis, and vulval muscle (Fig. 2E; Pierce et al. 2001; data not shown). Expressing ins-1 cDNA from its own promoter or in all neurons using the unc-14 promoter almost fully rescued the Aho phenotype of ins-1(nr2091) mutants, whereas no rescue occurred when expressing the ins-1 cDNA in the intestine using the ges-1 promoter (Fig. 2C). These results indicate that neuronal expression of INS-1 is sufficient to rescue the food-associated thermotactic behavior defect of ins-1(nr2091) mutants. We conducted cell-specific rescue experiments to determine whether the expression of INS-1 in any particular neuron is required for the association between temperature and feeding state. The expression of ins-1 cDNA using the ins-1, ncs-1, lin-11, unc-86, and ceh-14 promoters effectively rescued the defect of ins-1(nr2091) mutants, and weak rescue occurred upon ins-1 cDNA expression from the osm-6 and gpa-2 promoters (Fig. 2D). In contrast, no rescue occurred upon expression of ins-1 cDNA using the unc-42, tph-1, gcy-8, glr-3, or AIY promoter (Fig. 2D). Figure 1. aho-2(nj32) shows defects in food-associated thermotactic behavioral plasticity, normal response to starvation, and normal integration of two different chemosensory inputs. (A, top) Thermotaxis assay system. Single animals were placed on the agar at ~22°C, as indicated by the cross. (Bottom) A thermograph shows the stable radial temperature gradient (17°C-25°C) of the dish surface. (B) Interaction assay system. (C-J) Tracks on a radial temperature gradient of individual animals cultivated at 17°C or 25°C with or without food. (K) Results of the thermotaxis assay for well-fed or starved animals cultivated at 17°C. Thermotaxis was evaluated using "Fraction of 17," which includes the class "17." The asterisks indicate p < 0.005, as identified by t-tests in a comparison of starved aho-2(nj32) mutants with starved wild-type animals. Fed animals, n ≥ 30; starved animals, n ≥ 60. Error bars indicate SEM. (L) Results of the thermotaxis assay for animals cultivated at 25°C. Thermotaxis was evaluated using "Fraction of 25," which includes the class "25." The asterisks indicate p < 0.005 in a comparison of aho-2(nj32) mutants with starved wild-type animals (n ≥ 68 animals). Error bars indicate SEM. (M) Modulation of the locomotory rate. The number of body bends in 20 sec on the assay plates was scored. Gray and black bars indicate the results for well-fed animals transferred to assay plates without food (F > S) and with food (F > F), respectively. Hatched and white bars indicate the results for starved animals transferred to assay plates without food (S > S) and with food (S > F), respectively (n = 45 animals). Error bars indicate SD. (N) Wild-type and aho-2(nj32) animals were tested in the interaction assay, and the results are represented as Index (B/[A + B]).
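The scoring in the Figure 1 caption can be mirrored with a short sketch that computes a "Fraction of 17"-style index with a binomial SEM and compares two groups with a t-test. The per-animal counts below are hypothetical, invented purely for illustration, and the paper may have computed the SEM across assay replicates rather than across animals.

```python
import numpy as np
from scipy import stats

def fraction_index(hits, n_animals):
    """Fraction of animals scored in the target class (e.g., class '17'),
    with a binomial SEM; a simple stand-in for the assay's metric."""
    p = hits / n_animals
    return p, np.sqrt(p * (1.0 - p) / n_animals)

# Hypothetical per-animal outcomes (1 = reached the cultivation-temperature
# class), invented here for illustration; not data from the paper.
fed = np.r_[np.ones(27), np.zeros(3)]        # 27/30 animals
starved = np.r_[np.ones(9), np.zeros(51)]    # 9/60 animals
print(fraction_index(fed.sum(), fed.size))
print(fraction_index(starved.sum(), starved.size))
print(stats.ttest_ind(fed, starved, equal_var=False))
```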
Essentially, we did not identify any single neuron where the expression of INS-1 was required for the rescue, suggesting that INS-1 acts cell-nonautonomously. Close examination of the rescue results and of ins-1p::GFP expression in neurons revealed that the defect of the ins-1 mutant was rescued when INS-1 was expressed in at least one of the neurons that appeared to express INS-1 normally (Fig. 2E). To address the possibility that the lack of rescue by several promoters was caused by either too low or too high expression of the ins-1 cDNA, we constructed ins-1 mutant strains transgenic for different concentrations of ges-1p::ins-1 cDNA or gcy-8p::ins-1 cDNA and tested those strains for food-associated thermotactic responses. ges-1- or gcy-8-driven INS-1 expression in the ins-1(nr2091) strain did not significantly rescue the defect regardless of the concentration used (Supplementary Fig. 1), suggesting that the inability of some of the promoters to rescue the defect was not caused by insufficient or excessive expression of the ins-1 cDNA. ins-1 may be regulated transcriptionally, at the hormone-processing level, and/or at the level of secretion. We examined whether there was any difference in INS-1 expression between the fed state and the starved state using an ins-1 promoter::GFP fusion gene and a rescuable ins-1::GFP fusion gene. Light microscopic observation failed to find any difference in the expression level or localization of the fusion protein (data not shown). These results argue against transcriptional regulation of ins-1 expression. If INS-1 is regulated at the level of secretion in response to starvation, the concentration of extracellular INS-1 may be too low to detect with GFP. Overexpression of INS-1 in a wild-type background induced a partially abnormal thermotaxis phenotype (Fig. 2B), which is consistent with the model that the secretion of INS-1 modulates thermotactic behavior. INS-1 antagonizes DAF-2 insulin-like signaling in food-associated thermotactic plasticity A previous report suggested that INS-1 antagonizes DAF-2 insulin-like signaling for dauer formation in C. elegans (Pierce et al. 2001). We explored whether DAF-2 insulin-like signaling also functions in food-associated thermotactic behavioral plasticity by examining mutants of daf-2, a homolog of the insulin/IGF-1 receptor (Kimura et al. 1997), age-1, a homolog of PI-3-kinase (Morris et al. 1996), and daf-16, a forkhead-type transcription factor (Lin et al. 1997). The daf-16(m26) mutant, which is a suppressor of both the daf-2 and the age-1 mutant in dauer formation (Gottlieb and Ruvkun 1994; Larsen et al. 1995), showed a weak Aho phenotype (Fig. 3A). The daf-2(e1368) and age-1(hx546) mutants, however, normally avoided their cultivation temperature after a 3-h starvation (Fig. 3A). If INS-1 antagonizes DAF-2 insulin-like signaling for this integrative behavior, the behavioral responses of daf-2(e1368) and age-1(hx546) mutants to starvation might be opposite to the response of ins-1(nr2091) mutants. To test this hypothesis, we conducted a time-course assay for starvation-induced temperature avoidance. As the cultivation time under starvation conditions increased, the fraction of wild-type animals that migrated to the cultivation temperature gradually decreased (Fig. 3C). By contrast, the fraction of ins-1(nr2091) mutants that migrated to the cultivation temperature did not decrease much even after a 3-h starvation (Fig. 3C).
Consistent with our hypothesis, age-1(hx546) mutants started to avoid the cultivation temperature much earlier than wild-type animals (Fig. 3C). These results indicate that age-1(hx546) mutants could associate cultivation temperature with starvation more quickly than wild-type animals. daf-2(e1368) mutants showed a response similar to that of wild-type animals (Fig. 3C), which might be due to the fact that daf-2(e1368) is one of the weakest alleles in dauer formation (Gems et al. 1998). Because of developmental or behavioral defects, we were unable to examine the starvation-induced temperature avoidance of daf-2(mg43) and daf-2(e1370) mutants, both of which are stronger alleles than daf-2(e1368) in dauer formation (Kimura et al. 1997; Gems et al. 1998; data not shown). Double mutants were constructed and their food-associated thermotactic responses were tested to clarify the genetic interactions between the insulin-like signaling genes. With a 3-h starvation, both daf-2 and age-1 mutations partially suppressed the defective starvation-induced temperature avoidance of ins-1 mutants, although daf-2 did not suppress daf-16 (Fig. 3A). These results are consistent with a model in which INS-1 acts antagonistically on DAF-2 insulin-like signaling for food-associated thermotaxis behavior. daf-16; ins-1 double mutants showed a stronger defect than daf-16 single mutants (Fig. 3B), suggesting that ins-1 is genetically downstream of daf-16, which is consistent with a feedback loop from daf-16 to ins-1. INS-1 and HEN-1 act coordinately in food-associated thermotactic plasticity Insulin-like signaling, in cooperation with the TGF-β and cyclic GMP pathways, is one of the important pathways for dauer formation. We tested whether dauer formation pathways other than the insulin-like signaling pathway are also involved in the integrative behavior for temperature and feeding state. Dauer-defective (Daf-d) mutants for daf-5, daf-6, and daf-12, which encode the TGF-β pathway member Sno/Ski (da Graca et al. 2004), a Patched-related protein that functions upstream of both the TGF-β pathway and the cyclic GMP pathway (Schackwitz et al. 1996; Perens and Shaham 2005), and a nuclear receptor (Antebi et al. 2000), respectively, did not show the defects (Fig. 3B). Recently, Murakami et al. (2005) showed that the secreted protein TGF-β/DAF-7 is involved in memory acquisition of the cultivation temperature. However, daf-7(e1372) and daf-1(m40) mutants, the latter of which has a deficit in one subunit of the TGF-β receptor, did not show the defects, and neither the daf-1 nor the daf-7 mutation suppressed the defect of ins-1 mutants (Fig. 3B). HEN-1, a secretory protein with an LDL receptor motif, is reported to be involved in food-associated thermotactic plasticity (Ishihara et al. 2002). To test for a genetic interaction between insulin-like signaling and HEN-1, we constructed double mutants. The daf-2 mutation did not suppress the defect of hen-1 mutants. The ins-1; hen-1 double mutant, however, showed a stronger mutant phenotype than the ins-1 or hen-1 single mutants (Fig. 3B). These results suggest that INS-1 and HEN-1 act in parallel and that insulin-like signaling and HEN-1 signaling are major components in the regulation of food-associated thermotactic plasticity.
Insulin-like signaling functions in thermotaxis interneurons Where is the target cell that receives and processes INS-1 to antagonize the DAF-2 insulin-like signaling pathway in temperature-starvation integrative behavior? To address this question, we conducted a cell-specific rescue experiment on age-1(hx546) mutants with a 30-min starvation. Expressing age-1 cDNA in all neurons using the unc-119 promoter rescued the quicker starvation-induced temperature avoidance defect of age-1(hx546) mutants (Fig. 3D). Expressing age-1 cDNA in several neurons, including the AIZ, AIY, or RIA interneurons, all of which are essential interneurons for thermotaxis (Mori and Ohshima 1995), almost fully rescued the quicker avoidance defect of age-1(hx546) mutants (cf. Figs. 3D and 2E; Brockie et al. 2001). By contrast, expressing age-1 cDNA in the AFD thermosensory neurons with the gcy-8 promoter, in sensory neurons with the osm-6 promoter, or in many neurons with the unc-42 promoter did not rescue the defect (cf. Figs. 3D and 2E). These results suggest that AGE-1 (and probably the insulin-like signaling pathway) functions in thermotaxis interneurons for food-associated neural plasticity. Calcium imaging of the thermotaxis interneurons To analyze a physiological aspect of insulin-like signaling in the integrative behavior for food and temperature, we observed changes in the neuronal activity of the AIZ thermotaxis interneuron in live animals by measuring stimulus-evoked Ca2+ concentration changes using cameleon, a genetically encodable calcium indicator (Miyawaki et al. 1997; Kimura et al. 2004). The activity of the AIZ interneuron in wild-type animals cultivated at 17°C with food increased with warming and decreased with cooling, whereas the activity of the AIZ interneuron in starved wild-type animals was much less responsive to temperature changes (Fig. 4A; Kuhara and Mori 2006). The AIZ interneuron of starved ins-1(nr2091) animals was as active as that of fed ins-1(nr2091) animals (Fig. 4B). These results suggest that INS-1 is required for the starvation-induced negative regulation of AIZ neuron activity. Calcium imaging of AFD thermosensory neurons revealed that feeding state did not influence the activity of AFD (Fig. 4C). We recently found that the interneuron-deficient calcineurin mutant tax-6(sensory+, inter−) also showed a defect in the association between temperature and feeding state at 17°C (Fig. 3A; Kuhara and Mori 2006). TAX-6 calcineurin is required for temperature-starvation integrative behavior in both of two directly connected interneurons, AIZ and RIA, and, like INS-1, TAX-6 is required for the starvation-induced regulation of AIZ activity (Kuhara and Mori 2006). To investigate the genetic interaction between calcineurin-mediated signaling and insulin-like signaling in thermotaxis interneurons, we constructed daf-2; tax-6(sensory+, inter−) mutants. The food-associated thermotactic behavior defect caused by the tax-6(sensory+, inter−) mutation was not suppressed by the daf-2 mutation (Fig. 3A), which is consistent with the possibility that TAX-6 calcineurin acts downstream of DAF-2 in thermotaxis interneurons. We also investigated whether DAF-16 is involved in the transcription of tax-6 by comparing the expression of a tax-6::GFP translational fusion gene in a wild-type background with expression of the fusion gene in a daf-16(mu86) deletion mutant background. We could not find any differences in the expression level (data not shown). These results are consistent with the possibility that daf-16 is not required for the transcription of tax-6.
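The "Ratio Change" readout used in the calcium imaging above comes from ratiometric cameleon measurements: a rise in the YFP/CFP emission ratio reports an increase in intracellular Ca2+. A minimal processing sketch follows; the baseline window and the omission of bleed-through and bleaching corrections are simplifying assumptions, and the actual pipeline is described in the paper's Supplemental Material.

```python
import numpy as np

def ratio_change_percent(yfp, cfp, baseline_frames=50):
    """Percent change of the cameleon YFP/CFP emission ratio relative to
    the mean pre-stimulus ratio; positive values indicate a Ca2+ rise."""
    yfp = np.asarray(yfp, dtype=float)
    cfp = np.asarray(cfp, dtype=float)
    r = yfp / cfp
    r0 = r[:baseline_frames].mean()           # pre-stimulus baseline
    return 100.0 * (r - r0) / r0

# Synthetic example: a transient Ca2+ rise halfway through the recording.
t = np.arange(600)
cfp = 1000.0 - 0.1 * t                        # slow drift, for illustration
yfp = cfp * (1.0 + 0.2 * np.exp(-((t - 300) / 60.0) ** 2))
print(ratio_change_percent(yfp, cfp).max())   # ~20 (% ratio change)
```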
A neuroendocrine system modulates the neural circuit important for integrative behavior Our results suggest a model for food-associated thermotactic plasticity (Fig. 4D). During the association between cultivation temperature and starvation, INS-1 is secreted from several neurons and antagonizes insulin-like signaling by inhibiting the activity of the DAF-2 receptor. DAF-16 may be activated, probably through AGE-1, and a feedback loop from DAF-16 to INS-1 might exist. HEN-1 might also be secreted, from the AIY or ASE neurons (Ishihara et al. 2002). We thus suggest that a neuroendocrine system is important for modulating the neural circuit that underlies the integrative behavior. Murakami et al. (2005) argued that AGE-1 acts in AIY neurons to enhance isothermal tracking, which is one aspect of thermotaxis. Insulin-like signaling is also required for salt chemotaxis learning, in which animals pre-exposed to the chemoattractant NaCl under starvation conditions exhibit a reduced chemotactic response to NaCl (Tomioka et al. 2006). These reports also implicate the importance of insulin-like signaling in behavioral plasticity. What are the targets of DAF-16 in the integrative behavior for temperature and food? Insulin-like signaling is part of the dauer formation pathway, which has many feedback loops (Schackwitz et al. 1996). The results of the present study are consistent with a feedback loop in insulin-like signaling for this integrative behavior. Assuming the existence of a feedback loop, one clue to the targets of DAF-16 in thermotactic plasticity might be found in a report by Murphy et al. (2003), which suggested that the insulin homologs INS-2, INS-7, INS-18, and INS-21 are likely to be direct or indirect targets of DAF-16. Likewise, it is plausible that any of these insulin-like molecules act agonistically on DAF-2 to activate the insulin-like pathway in food-associated thermotactic plasticity. We believe that these issues are critical for further study. Strains and genetics Standard techniques were used for culturing and handling C. elegans. For details and strains, see Supplemental Material. Behavioral assays A radial temperature-gradient assay was performed as described previously (Mori and Ohshima 1995; Mohri et al. 2005). The locomotory rate assay was performed according to a previous report (Sawin et al. 2000). The interaction assay was also performed as previously described (Ishihara et al. 2002). For details, see Supplemental Material. Molecular biology and germline transformation Standard methods for molecular biology and germline transformation were used. For details and vectors, see Supplemental Material. In vivo calcium imaging and data analysis In vivo calcium imaging was performed essentially according to Kimura et al. (2004) and Kuhara and Mori (2006). For details, see Supplemental Material. (B) ins-1(nr2091) mutants. n = 10-13. Relative increases or decreases in the intracellular calcium concentration were measured as increases or decreases, respectively, in the YFP/CFP fluorescence ratio of the cameleon protein (Ratio Change). Temperature (Temp.) is shown as a black line at the bottom. (C) Calcium imaging of the AFD thermosensory neuron in wild-type animals grown at 20°C expressing the cameleon protein, cultivated under fed or starved conditions (n = 18-20). Temperature (Temp.) is shown as a black line at the bottom.
(D) Model of the suggested genetic pathway for modulation of the integrative behavior between cultivation temperature and feeding state by insulin-like signaling, based on the results presented in this study. DAF-7 is thought to be involved in memory acquisition of the cultivation temperature (Murakami et al. 2005). short promoter; P. Swoboda and H. Sasakura for the osm-6 promoter; J. Sze for the tph-1 promoter; J. McGhee for the ges-1 promoter; H. Kagoshima for the ceh-14 promoter; all the members of the Mori Laboratory for technical advice and stimulating discussions; A. Coulson and R. Shownkeen for cosmids; and the C. elegans Sequence Consortium for updating the C. elegans genome information. The Caenorhabditis Genetics Center provided some of the strains used in this study. This work was supported by a Grant-in-Aid for Scientific Research on Priority Areas-Molecular Brain Science from the MEXT (00210010) and by the HFSPO (to I.M.). I.M. is a Scholar of the Institute for Advanced Research of Nagoya University.
Identification of GPCR-Interacting Cytosolic Proteins Using HDL Particles and Mass Spectrometry-Based Proteomic Approach

G protein-coupled receptors (GPCRs) have critical roles in various physiological and pathophysiological processes, and more than 40% of marketed drugs target GPCRs. Although the canonical downstream target of an agonist-activated GPCR is a G protein heterotrimer, there is a growing body of evidence suggesting that other signaling molecules interact, directly or indirectly, with GPCRs. However, due to their low abundance in intact cell systems and the poor solubility of GPCRs, identification of these GPCR-interacting molecules remains challenging. Here, we establish a strategy to overcome these difficulties by using high-density lipoprotein (HDL) particles. We used the β2-adrenergic receptor (β2AR), a GPCR involved in regulating cardiovascular physiology, as a model system. We reconstituted purified β2AR in HDL particles, to mimic the plasma membrane environment, and used the reconstituted receptor as bait to pull down binding partners from rat heart cytosol. A total of 293 proteins were identified in the full agonist-activated β2AR pull-down, 242 proteins in the inverse agonist-activated β2AR pull-down, and 210 proteins were commonly identified in both pull-downs. A small subset of the isolated β2AR-interacting proteins was confirmed by Western blot: three known β2AR-interacting proteins (Gsα, NHERF-2, and Grb2) and three newly identified β2AR-interacting proteins (AMPKα, acetyl-CoA carboxylase, and UBC-13). Profiling of the identified proteins showed a clear bias toward intracellular signal transduction pathways, which is consistent with the role of β2AR as a cell signaling molecule. This study suggests that HDL particle-reconstituted GPCRs can provide an effective platform for the identification of GPCR binding partners when coupled with a mass spectrometry-based proteomic analysis.

Introduction
GPCRs are the largest family of membrane proteins in the human genome and perform vital signaling functions in vision, olfactory perception, and signal transduction processes in the metabolic, endocrine, neuromuscular and central nervous systems [1]. All GPCRs share a common seven-transmembrane (TM) α-helical structure with an extracellular N-terminus and an intracellular C-terminus. Agonists bind on the extracellular side of the receptors, which promotes conformational changes in the TM segments and associated intracellular regions. These conformational changes lead to the interaction with and activation of heterotrimeric G proteins (α, β and γ subunits) [1]. However, heterotrimeric G proteins are not the only proteins that bind to GPCRs, and growing evidence indicates that a variety of other proteins may physically and functionally associate with GPCRs [2,3]. A large set of recent studies demonstrates that many other intracellular molecules interact with GPCRs to regulate G-protein-independent signaling, desensitization, internalization, and resensitization [2,3,4,5]. The majority of neuro-hormonal signals to the heart are mediated by GPCRs [6,7,8]. Occasionally these signals become pathogenic; for example, chronically elevated sympathetic activity stimulating β-adrenergic receptors (βARs) is associated with heart failure progression and mortality [9,10]. Previous studies suggest that chronic stimulation of the β1AR plays a major role in the pathogenesis of dilated cardiomyopathy, while chronic stimulation of β2ARs is protective [10].
Due to the adverse effects of chronic activation of β1ARs, β-blockers have been widely used for heart failure management. Although βARs have been among the most extensively studied members of the GPCR family in the heart, little is known about the signaling pathways that mediate the pathologic response to chronic β1AR stimulation or the protective effects of β2AR stimulation. Therefore, identification of βAR-mediated signaling pathways will provide a better understanding for the development of therapeutic targets for heart failure. To date, yeast two-hybrid overlay technologies or pull-down assays followed by mass spectrometry-based protein identification have been used to identify protein-protein interactions [11,12]. Mass spectrometry has become the method of choice for the identification, quantification, and detailed primary structural analysis of protein components in complex mixtures [11,13,14,15]. However, the application of mass spectrometry to the identification of GPCR-interacting proteins has been limited due to the challenges of working with membrane proteins and the low abundance of GPCRs in native tissue [16,17]. To address these challenges, we developed a new approach for identifying interacting proteins by preparing GPCRs within high-density lipoprotein (HDL) particles, where GPCRs are in a more membrane-like environment than in a detergent micelle. An HDL particle is composed of a dimer of apolipoprotein A-I (ApoA-I) surrounding a planar bilayer of about 160 phospholipids, in which GPCRs are easily reconstituted in vitro [reconstituted HDL (rHDL)] [18]. Electron microscopy images of these particles showed a uniform disk-shaped structure (10-12 nm in diameter and 40 Å thick, the same thickness as a plasma membrane) [18]. Previous studies demonstrate that HDL particle-reconstituted β2AR (β2AR·rHDL) is monomeric and fully functional by virtue of its capacity to support both high-affinity agonist binding and rapid agonist-mediated nucleotide exchange of G proteins [18]. In this study, we used the β2AR·rHDL as bait for the identification of β2AR-interacting proteins in heart cytosol to gain insights into β2AR-mediated signaling pathways in the heart. β2AR-interacting proteins present in adult rat heart cytosol were identified using β2AR·rHDLs and bioinformatic analysis. The identified molecules suggest some novel β2AR signaling pathways in the heart, which could provide insight into the noncanonical roles played by the β2AR in the heart.

Ethics Statement
The use of animals for the experiments followed Stanford University guidelines, and all experiments involving animals were approved by the Stanford University Administrative Panel on Laboratory Animal Care.

Materials
All materials were purchased from Sigma Aldrich (St. Louis, MO) unless otherwise indicated. Sf9 insect cells, insect cell culture media and transfection reagents were obtained from Expression Systems (Woodland, CA). Dodecylmaltoside was from Affymetrix (Santa Clara, CA). Palmitoyl-oleoyl-glycero-phosphocholine and palmitoyl-oleoyl-phosphatidylglycerol were from Avanti Polar Lipids (Alabaster, AL). Complete protease inhibitor cocktail was from Roche (Indianapolis, IN). Ni-NTA resin was made using Chelating Sepharose Fast Flow (GE Healthcare Biosciences, Pittsburgh, PA) according to the manufacturer's instructions.
Heart Cytosol Preparation
Adult Sprague Dawley rat hearts were homogenized in buffer A (25 mM HEPES, 140 mM KCl, 12 mM NaCl, 0.8 mM MgSO4, 1 mM EDTA, pH 7.4) containing complete protease inhibitor cocktail. The crude homogenate was centrifuged for 10 min at 1,000 g, and the supernatant was centrifuged again for 30 min at 18,000 g at 4 °C. The supernatant was collected as cytosol, and the protein concentration was adjusted to 10 mg/ml.

β2AR and ApoAI Preparation
β2AR was prepared as previously described. Briefly, N-terminally Flag-tagged β2AR was expressed in Sf9 insect cells using recombinant baculovirus [19]. Sf9 cell membranes were solubilized in dodecylmaltoside, and the β2AR was purified by sequential Flag-specific M1 antibody and ligand affinity chromatography. Wild-type His-tagged human ApoA-I was expressed in and purified from E. coli as previously described [18]. The purity of the purified β2AR and ApoAI was tested by SDS-PAGE and Coomassie staining (Figure S1). Figure S1 shows that the purified samples contain no proteins other than β2AR or ApoAI.

HDL Particle Formation
β2AR was reconstituted into rHDL as previously described [18]. Briefly, a mixture of palmitoyl-oleoyl-glycero-phosphocholine and palmitoyl-oleoyl-phosphatidylglycerol was used in combination (3:2 molar ratio) to mimic the zwitterionic environment of a cell membrane. Lipids were solubilized with HNE buffer (20 mM HEPES, 100 mM NaCl, 1 mM EDTA, pH 7.5) plus 50 mM cholate. An rHDL reconstitution consisted of the following in a final volume of 1.3 ml: 24 mM cholate, 8 mM lipid, and 100 μM apoA-I in HNE buffer. For receptor reconstitution in rHDL particles, 2 μM of β2AR was added. After incubation for 2 hrs on ice, samples were subjected to BioBeads (Bio-Rad, Hercules, CA) to remove detergent, resulting in the formation of rHDL. β2AR·rHDL was subsequently purified away from receptor-free empty rHDL and immobilized on 100 μl of M1 anti-FLAG immunoaffinity resin. Empty rHDL for the negative control was prepared by the same procedure but without β2AR.

Pull-down of β2AR-interacting Proteins from Heart Cytosol
β2AR·rHDL (106 μg of β2AR) was immobilized on Flag-specific M1 resin (100 μl) by mixing β2AR·rHDL and M1 resin for 2 hrs at room temperature as described above. Three ml of prepared heart cytosol (10 mg/ml) with 4 mM CaCl2 was added and incubated overnight at 4 °C. The supernatant was removed, and the resin was washed 7 times with 1 ml of ice-cold buffer B (25 mM HEPES, 140 mM KCl, 12 mM NaCl, 0.8 mM MgSO4, 2 mM CaCl2, pH 7.4). β2AR·rHDL and β2AR·rHDL-interacting proteins were eluted with 200 μl of elution buffer (20 mM HEPES, 100 mM NaCl, 0.2 mg/ml FLAG peptide and 8 mM EDTA, pH 7.5). Under this elution condition, β2AR·rHDL and β2AR·rHDL-interacting proteins are effectively eluted, but the M1 antibody remains on the beads. The eluted samples were concentrated using a SpeedVac by reducing the volume to 50 μl.

Negative Control Experiments
We used two different negative controls: an M1 resin control and an empty rHDL control. The M1 resin control was used to identify proteins that bind nonspecifically to M1 resin. For the M1 resin control, empty M1 resin (100 μl) was incubated with heart cytosol (3 ml of 10 mg/ml) overnight at 4 °C, followed by the procedure described above. M1 resin control and β2AR·rHDL pull-down samples were run on an SDS-PAGE gel and stained with GelCode Blue Stain Reagent (Pierce, Rockford, IL) (Figure 1B). The first lane of Figure 1B shows the M1 resin control.
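To make the reconstitution arithmetic above concrete, here is a minimal, hypothetical calculation sketch (not code from the study). It assumes that the apoA-I and receptor units, garbled in extraction, were originally micromolar, as the particle stoichiometry suggests, and all helper names are ours:

```python
# Illustrative back-of-the-envelope calculation of the rHDL reconstitution mix
# described above (1.3 ml final volume; 24 mM cholate; 8 mM lipid at a
# POPC:POPG molar ratio of 3:2; 100 uM apoA-I; 2 uM beta2AR).
# The unit interpretation (uM for protein/receptor) is our assumption.

FINAL_VOLUME_ML = 1.3

def nmol(conc_uM: float, volume_ml: float) -> float:
    """Amount of substance in nmol for a concentration in uM and volume in ml."""
    return conc_uM * volume_ml  # uM * ml == nmol

components_uM = {
    "cholate": 24_000.0,  # 24 mM expressed in uM
    "lipid":    8_000.0,  # 8 mM total phospholipid
    "apoA-I":     100.0,
    "beta2AR":      2.0,
}

amounts = {name: nmol(c, FINAL_VOLUME_ML) for name, c in components_uM.items()}

# Split total lipid into POPC and POPG at a 3:2 molar ratio.
amounts["POPC"] = amounts["lipid"] * 3 / 5
amounts["POPG"] = amounts["lipid"] * 2 / 5

for name in ("cholate", "POPC", "POPG", "apoA-I", "beta2AR"):
    print(f"{name:>8}: {amounts[name]:10.1f} nmol")

# Consistency check: ~160 phospholipids per particle enclosed by 2 apoA-I
# molecules implies an 80:1 lipid:apoA-I molar ratio, which the 8 mM : 100 uM
# mix reproduces exactly.
assert round(amounts["lipid"] / amounts["apoA-I"]) == 80
```

The final assertion is the reason the micromolar reading seems plausible: the quoted particle composition (about 160 phospholipids per disc, two ApoA-I per disc) gives the same 80:1 lipid:apoA-I molar ratio as the stated mix.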
The empty rHDL control was used to identify proteins that bind to either ApoAI or the lipids in the rHDL. Empty rHDL was prepared as described above without adding β2AR and was immobilized on 100 μl of Ni-NTA resin via the His-tag on ApoAI. Heart cytosol (3 ml of 10 mg/ml) was incubated with empty rHDL-immobilized Ni-NTA resin or Ni-NTA resin alone overnight at 4 °C. The bare Ni-NTA resin was used to discriminate proteins that bind nonspecifically to the Ni-NTA resin. The resins were washed extensively with buffer B, and bound proteins were eluted with buffer B containing 200 mM imidazole. Empty rHDL-immobilized Ni-NTA resin control and Ni-NTA resin samples were run on an SDS-PAGE gel and stained with GelCode Blue Stain Reagent (Figure 1C). The first lane of Figure 1C shows the empty Ni-NTA resin sample, and the second lane shows the empty rHDL-immobilized Ni-NTA resin control.

MS and Identification
Forty-five μl of each eluted sample was loaded onto a 10% polyacrylamide gel and stained with GelCode Blue Stain Reagent. Gel lanes (Figure 1B and 1C) were cut and submitted to the Vincent Coates Foundation Mass Spectrometry Laboratory at Stanford University for in-gel tryptic digestion and protein identification by mass spectrometry. Scaffold 3 (Proteome Software Inc., Portland, OR) was used to validate MS/MS-based peptide and protein identifications. Peptides were identified from MS/MS spectra by searching the IPI Rattus norvegicus database using the Mascot search algorithm (www.matrixscience.com). The following parameters were used: trypsin specificity, with cysteine carbamidomethylation as a fixed modification. Protein identifications were accepted if they could be established at >95.0% probability and contained at least two unique identified peptides. Protein probabilities were assigned by the ProteinProphet algorithm. Using these stringent identification parameters, the peptide false discovery rate was 0.2% and the protein false discovery rate was 0.1%.

Data Analysis
Bioinformatics analysis of molecular function classification and canonical pathway analysis was performed using Ingenuity Pathways Analysis (Ingenuity Systems, www.ingenuity.com). The functional analysis identified the biological functions that were most significant to the data set. A right-tailed Fisher's exact test was used to calculate a p-value determining the probability that each biological function assigned to the data set is due to chance alone. Canonical pathway analysis identified the pathways from the Ingenuity Pathways Analysis library of canonical pathways that were most significant to the data set. The significance of the association between the data set and a canonical pathway was measured in two ways: 1) a ratio of the number of molecules from the data set that map to the pathway divided by the total number of molecules that map to the canonical pathway is displayed; and 2) Fisher's exact test was used to calculate a p-value determining the probability that the association between the genes in the data set and the canonical pathway is explained by chance alone.

Western Blot
Five μl of eluted sample or 1 μl of cell lysate was separated by 10% SDS-PAGE and transferred to a PVDF membrane. Blots were blocked with 5% nonfat dry milk for 1 hr at room temperature and then incubated with a primary antibody for 2 hrs at room temperature, followed by incubation with an IR dye-labeled secondary antibody (Rockland Immunochemicals, Gilbertsville, PA) for 1 hr at room temperature.
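As an illustration of the acceptance criteria just described (>=95% protein probability and at least two unique peptides), a filter might look like the following sketch; the record fields and example accessions are hypothetical, not the study's actual export format:

```python
# Minimal sketch of the identification-acceptance filter described above.
from dataclasses import dataclass

@dataclass
class ProteinID:
    accession: str
    probability: float   # ProteinProphet probability, in [0, 1]
    unique_peptides: int

def accept(hit: ProteinID,
           min_probability: float = 0.95,
           min_unique_peptides: int = 2) -> bool:
    """Apply the stringent identification criteria used in the study."""
    return (hit.probability >= min_probability
            and hit.unique_peptides >= min_unique_peptides)

hits = [
    ProteinID("GNAS_RAT", 0.99, 5),   # accepted
    ProteinID("FAKE1_RAT", 0.99, 1),  # rejected: only one unique peptide
    ProteinID("FAKE2_RAT", 0.80, 3),  # rejected: probability below 0.95
]
accepted = [h.accession for h in hits if accept(h)]
print(accepted)  # -> ['GNAS_RAT']
```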
The signal was visualized with Odyssey imaging systems (LI-COR Biosciences, Lincoln, NE).

Pull-down of β2AR-interacting Proteins
To isolate proteins that interact with the β2AR in the heart, we reconstituted purified β2AR in rHDL and immobilized it on Flag-specific M1 resin. Immunoprecipitation with the M1 antibody is beneficial because proteins can be eluted without disrupting the interaction between the M1 antibody and the resin, so there is no M1 IgG protein in the eluted sample. Note that there is no IgG band at 50 kDa or 25 kDa in the first lane of Figure 1B; the bands at 50 kDa and 25 kDa in the second and third lanes of Figure 1B are β2AR and ApoAI, respectively. β2AR·rHDL immobilized on M1 resin was occupied by either 50 μM of the full agonist BI-167107 (BI) or 50 μM of the inverse agonist carazolol (Cz), and then incubated with adult rat heart cytosol (Figure 1A). β2AR·rHDL and interacting proteins were eluted, separated by SDS-PAGE, and stained with GelCode Blue Stain Reagent (Figure 1B). To exclude proteins non-specifically bound to the M1 resin, heart cytosol was incubated with M1 resin alone (lane 1, Figure 1B) (see "Materials and Methods" for details). To exclude proteins bound to rHDL, empty rHDL-immobilized Ni-NTA resin was incubated with heart cytosol (lane 2, Figure 1C). Proteins that bound nonspecifically to Ni-NTA resin were eliminated by including the Ni-NTA resin control (lane 1, Figure 1C). Therefore, BI-occupied β2AR-interacting proteins were defined as the proteins identified in [(lane 2 of Figure 1B − lane 1 of Figure 1B) − (lane 2 of Figure 1C) − (lane 1 of Figure 1C)]. Similarly, Cz-occupied β2AR-interacting proteins were defined as the proteins identified in [(lane 3 of Figure 1B − lane 1 of Figure 1B) − (lane 2 of Figure 1C) − (lane 1 of Figure 1C)]. Gel pieces were cut out with no gaps from each lane (1A through 3H of Figure 1B and 1A through 2F of Figure 1C) and subjected to in-gel trypsin digestion. The tryptic digests were analyzed on a Thermo LTQ-Orbitrap Velos ETD LC-MS.

Identification of β2AR-interacting Proteins by MS
A total of 521 proteins were identified from the gel pieces shown in Figure 1B, and 265 proteins were identified from the gel pieces shown in Figure 1C (Table S1). After subtracting proteins that were found in the control experiments as described above, 327 proteins were identified specifically in β2AR·rHDL pull-down samples (Table S2). The majority of proteins (210 proteins) were found in both the BI-occupied and the Cz-occupied samples. Eighty-three proteins were detected only in the BI-occupied sample, and 32 proteins were specific to the Cz-occupied sample (Table S2). The protein false discovery rate was 0.1% (see Materials and Methods). The majority of proteins were detected in the expected molecular weight range (Table S2, and Table 1 for selected proteins). Five of the identified proteins are known to interact with the β2AR based on protein-protein interaction databases (BioGRID, MINT, IntAct, HPRD and MIPS) (Table S2, Table S3 and Table 1). The interaction of a subset of the newly identified proteins with the β2AR was further confirmed by Western blotting (see below).

Validation of the Identified Proteins by Western Blotting
To validate the MS analysis results, we performed Western blot analysis on select proteins (Table 1 and Figure 2).
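The lane-subtraction logic above is ordinary set arithmetic; the sketch below restates it with invented protein names purely for illustration (the study's real sets, with 521 pull-down and 265 control identifications, are in Tables S1 and S2):

```python
# Sketch of the control-subtraction logic: specific interactors are what
# remains after removing the M1-resin, empty-rHDL and Ni-NTA control hits.
bi_pulldown = {"GNAS", "PRKAA1", "ACACA", "SLC9A3R2", "HSPA8", "ALB"}
cz_pulldown = {"GRB2", "UBE2N", "SLC9A3R2", "HSPA8", "ALB"}

m1_control     = {"ALB"}    # nonspecific binders of M1 resin
rhdl_control   = {"HSPA8"}  # binders of ApoAI/lipid (empty rHDL)
ni_nta_control = set()      # nonspecific binders of Ni-NTA resin

controls = m1_control | rhdl_control | ni_nta_control
bi_specific = bi_pulldown - controls
cz_specific = cz_pulldown - controls

shared  = bi_specific & cz_specific  # found with agonist and inverse agonist
bi_only = bi_specific - cz_specific  # e.g., Gs-alpha behaved this way
cz_only = cz_specific - bi_specific

print(sorted(shared), sorted(bi_only), sorted(cz_only))
# In the study these three groups contained 210, 83 and 32 proteins.
```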
Both known β2AR-interacting proteins (NHERF-2, Grb2 and Gsα) and novel β2AR-interacting proteins (AMPKγ, AMPKα, ACC and Ubc13) were selected for validation by Western blotting (Figure 2). Interestingly, both in the MS analysis (Table S2 and Table 1) and on the Western blot (Figure 2), Gsα was identified only in the agonist-occupied pull-down sample. In contrast, Grb2 and Ubc13 were found in both agonist- and inverse agonist-occupied pull-downs by Western blotting (Figure 2) but only in inverse agonist-occupied samples in the proteomic analysis (Table S2 and Table 1). These proteins were not detected in the M1 resin control (lane 1, Figure 2), the empty rHDL-immobilized Ni-NTA control, or the empty Ni-NTA negative control (data not shown).

Bioinformatics Analysis
Proteins from β2AR·rHDL pull-downs and control pull-downs were assigned broad classifications based upon known or predicted functions using Ingenuity Pathway Analysis software. Proteins with unknown functions were omitted from this classification, and proteins with multiple functions were assigned to multiple groups. Proteins were grouped into 25 functional groups: Cell signaling; Cell-to-cell signaling/interaction; Energy production; Nucleic acid metabolism; Lipid metabolism; Carbohydrate metabolism; Amino acid metabolism; Small molecule biochemistry; Molecular transport; Protein trafficking; DNA replication/recombination/repair; Gene expression; Cellular function/maintenance; Cellular compromise; Cell cycle; Cellular assembly/organization; Cell morphology; Cellular movement; Cell death/survival; Cellular development; Cellular growth/proliferation; Post-translational modification; Protein folding; Protein synthesis; and Protein degradation (Figure 3A and Table 2). Interestingly, molecules involved in cell signaling, molecular transport, protein trafficking, and protein degradation are predominant in the β2AR·rHDL pull-downs compared to the control sample. Identified proteins were also analyzed against canonical pathways using Ingenuity Pathway Analysis software. The top 15 canonical pathways for the data set of the β2AR·rHDL pull-downs or the control sample are presented in Figure 3B. As expected from the known cellular role of the β2AR, β2AR·rHDL pull-downs were biased toward signal transduction pathways (Figure 3B, left), whereas control samples were biased toward metabolic pathways (Figure 3B, right).

Discussion
In the present study, we identified β2AR-interacting proteins in rat heart cytosol by using full-length β2AR reconstituted in plasma membrane-mimicking HDL particles. To our knowledge, this is the first comprehensive study to investigate GPCR-interacting proteins in the heart, or in any primary tissue other than brain. The advantage of reconstituting GPCRs in HDL particles is that GPCRs are more stable and in a more physiological conformation than detergent-solubilized GPCRs. Furthermore, this approach overcomes the low endogenous expression of the receptor in the heart by using a large amount of β2AR·rHDL as bait. Various methods have been used to screen for direct and indirect binding partners of GPCRs. Among those, the affinity isolation/mass spectrometry-based proteomic approach allows the capture and analysis of larger proteome units of protein complexes and can be used for isolating and purifying complexes from cellular and tissue preparations [11,20]. However, the proteomic analysis of GPCRs has been challenging due to their low endogenous expression levels and hydrophobicity.
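For readers who want to reproduce the kind of canonical pathway scoring described in the Data Analysis section (a mapping ratio plus a right-tailed Fisher's exact test), the following hedged sketch shows the computation; all counts except the 327-protein data set size are invented, and the background size is an assumption rather than IPA's actual reference set:

```python
# Sketch of the two pathway-significance measures described above.
from scipy.stats import fisher_exact

dataset_in_pathway = 12     # identified proteins mapping to the pathway
dataset_total      = 327    # proteins specific to the beta2AR pull-downs
pathway_total      = 150    # all molecules annotated to the pathway
background_total   = 20000  # assumed size of the reference knowledge base

# Measure 1: fraction of the pathway covered by the data set.
ratio = dataset_in_pathway / pathway_total

# Measure 2: right-tailed Fisher's exact test on the 2x2 contingency table.
table = [
    [dataset_in_pathway, dataset_total - dataset_in_pathway],
    [pathway_total - dataset_in_pathway,
     background_total - dataset_total - (pathway_total - dataset_in_pathway)],
]
_, p_value = fisher_exact(table, alternative="greater")
print(f"ratio = {ratio:.3f}, p = {p_value:.2e}")
```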
To date, identification of interacting proteins in native tissue has been successful for only a few GPCRs, including mGluR5, the 5-HT receptors (5-HT2A, 5-HT2C, and 5-HT4a), and the α2B-AR [21,22,23,24,25]. All of these GPCRs were studied in brain tissue, where GPCRs and their binding partners are highly expressed, enabling isolation of sufficient quantities of the receptor and associated proteins. Furthermore, the studies with 5-HT receptors and the α2B-AR used C-terminal peptides of the receptors (not the full-length GPCR) as baits [21,23,24,26,27]. Therefore, those studies could not identify binding partners that interact with GPCR domains outside of the C-terminus. The present study successfully used full-length β2AR as bait and identified binding partners from heart tissue, where the expression level of endogenous β2AR is very low.

Although the present study identified β2AR-interacting proteins from heart cytosol, there are limitations. First, β2AR-interacting membrane proteins cannot be purified because the β2AR is trapped in the rHDL and detergents cannot be used to solubilize the membrane proteins. GPCRs interact with membrane proteins as well as cytosolic proteins, and the β2AR is known to interact with various membrane proteins (Table S3), but we could not purify these proteins due to this limitation of the system. Second, the results of the present study do not represent proteins that bind to the β2AR via post-translational modifications (PTMs). GPCRs are known to undergo various PTMs, including phosphorylation, ubiquitination, glycosylation and nitrosylation [28,29,30]. However, β2AR purified from insect cells does not carry the same PTMs as β2AR from mammalian cells. Therefore, proteins that are known to interact with phosphorylated β2AR (e.g., β-arrestins) [31] were not identified in this study. Lastly, as expected, not all previously known β2AR-interacting proteins were identified in our search (Table S3). This may be due to the intrinsic limitation of mass spectrometry-based protein identification (false-negative detection), the low binding affinity of those proteins for the β2AR, or the artificial environment of β2AR·rHDL. Additional studies are required to overcome these limitations; however, we believe that the described method represents an improvement on previously described methods for identifying GPCR-interacting proteins.

Bioinformatic analyses of β2AR·rHDL pull-downs showed distinct protein profiles compared to control pull-downs (Figure 3). Functional analysis indicated that a higher percentage of proteins from β2AR·rHDL pull-downs are involved in cell signaling and protein trafficking when compared with controls (Figure 3A), suggesting that the identified proteins are not the result of non-specific binding.

Table 2 (fragment). Cell signaling: ALOX15, NRAS, RAB3A, CFL1, RRAD, ATP2A1, RAB7A, ATP2A3, LOC643751, NOS3, RAP1A, KPNB1, TGM2, GNB1, PPP2CB, HP, RRAS2, CADPS, GNAO1, EEF1A1, DNAJA3, YWHAQ (includes EG:22630), IPO5, NME2, RHEB, (STAT4, GNAI2, GNAS, RHOG, PRKAA1, LOC643751, YWHAE, STAT1), CUL5, QKI, APOA1, PPP2CA, CUL4A, CUL1, ATP2A1, BAG3, SH3GLB1, LIPE, DDX3X, INPPL1, NOS3, LOC643751, ROCK2, CIAPIN1, GNB1, TGM2, DYNLL1, HK2, PRKAA2, DNAJA3, SLC9A3R2, PPM1A, TXN, GPX4, RHEB, PLA2G16, ALOX15, NRAS, CD36, SIRT3, PFKM, PPP2CB, PLA2G6, FIS1, PPP2R1A, TUBA1A, CAPNS1, RRAS2, SIRT2, CAPN1, GNAO1, EEF1A1, TPP1, CLIC4, MAP4, UBA1. Additional entries: HSPB1, AKR1B1, ALDH2, MAP4, NME2, PRDX5, SELENBP1, UBA1.
Canonical pathway analysis used the list of identified proteins to predict relevant signaling pathways and confirmed the difference between β2AR·rHDL pull-downs and control pull-downs. The majority of pathways from the β2AR·rHDL pull-downs are signal transduction-related, whereas most of the top 15 pathways from the control pull-downs are related to metabolic proteins that are enriched in the heart (Figure 3B). In addition to the known β2AR signaling pathways in the heart (e.g., cardiac β-adrenergic signaling, protein ubiquitination, clathrin-mediated endocytosis and G beta-gamma signaling), the present study suggests the involvement of the β2AR in novel signaling pathways, such as the AMPK, PI3K/AKT and integrin signaling pathways (Figure 3). β2AR interactions with selected proteins identified in the β2AR·rHDL pull-downs were confirmed by co-immunoprecipitation and Western blot analysis (Figure 2), indicating that the identified proteins are not false positives. The role of these novel signaling pathways in β2AR function in heart physiology and pathology warrants further investigation. Taken together, these bioinformatic analyses confirm the utility of GPCR·rHDL as an experimental system for identifying GPCR-interacting proteins.
Workplace Conflict on Productivity and Emotional Stability of Employees

Recent observers have noticed that employee productivity in nearly all kinds of organizations is diminishing on a day-to-day basis, particularly in the field of education. The researcher found a number of factors that affect the productivity of employees, one of the major ones being workplace conflict. Therefore, the main goal of this study was to examine workplace conflict and its effects on employee productivity and emotional stability. This research also examined the mediating role of workplace politics between workplace conflict and employee productivity among members of the Faculty of Education, Adekunle Ajasin University, Akungba-Akoko, Ondo State, Nigeria. A stratified sampling technique was used to collect data from the concerned population using a structured questionnaire. To achieve the goals of the study, different statistical techniques were applied using SPSS. A reliability test was used to check the reliability of the data. Furthermore, t-test analysis was used to investigate the relationship between workplace conflict and employee productivity; the finding of this test showed that the relationship between workplace conflict and employee productivity is negative. A macro process tool was used to investigate whether workplace politics mediates the relationship between workplace conflict and employee productivity. The finding of this test showed that workplace conflict significantly predicts employee emotional stability and employee productivity.

Introduction
Conflict is inevitable among humans, be it at home, in church or in an organization, especially when there is interaction between two or more individuals, groups or organizations; it is largely caused by differences in individual perceptions, goals, interests, ideas, feelings and values. According to Shetach (2012), conflict is part of social and business life and hence is found everywhere. Conflict can be seen as a reality of social life that exists at all levels of society; it can be said to be as old as man. Due to the immense social interaction that takes place in an organization, conflict is unavoidable; however, its management determines whether the result will be positive or negative. Scholars have asserted that many types of relationships, such as families, churches, marriages, nations, ethnic groups, and organizations, experience conflict (Deutsch, Coleman & Marcus, 2006; Afful-Broni, 2012). When conflict is mentioned, people tend to perceive it as negative and hardly look at it from a positive angle. There are functional and dysfunctional conflicts. Conflicts that compel us to be creative problem resolvers, maintain a healthy workplace, and come up with a structure that enables the organization to benefit from a diversified workforce while creating opportunities for redevelopment and the acquisition of new skills can be classified as functional conflicts, while those that negatively affect employees both psychologically and emotionally and lead to low productivity can be classified as dysfunctional conflicts. It is imperative to note that how conflicts are managed determines whether their outcome is functional or dysfunctional. While conflict is generally perceived as dysfunctional, it can also be functional; conflict has both positive and negative effects.
It can be positive when it enhances creativity, the clarification of points of view, and the development of human capabilities to handle interpersonal divergences. There is no anomaly in organizational conflict, because it produces or presents an opportunity for modification and settlement between the aggrieved parties for the well-being of both the employees and the organization (Osad & Osas, 2013). Conflict can be negative when it creates resistance to change, establishes uproar, breeds distrust in interpersonal relations, and leads to low productivity and organizational ineffectiveness (Hotepo, Asokere, Abdul-Azeez, & Ajemunigbohun, 2010). Tabitha and Florence (2019) referred to individual conflict as "man against self" conflict, in which the individual's state of mind is largely dictated by circumstances within or around him or her, such as anger, addiction, depression, frustration and confusion; this could result in aggression. It could be a conflict of values or of priorities, in which a person continues to battle or contend with his mind and habits, leading to difficulties in deciding on a goal. Interpersonal conflict is conflict that occurs between two or more individuals working together in groups or crews. From the organizational view, this can also be referred to as worker-to-worker or lateral conflict because it occurs mostly among employees on the same hierarchical level; this conflict is part of life and is present in every organization (Cloke & Goldsmith, 2011). Nistorescu (2019) also views this kind of conflict as a means through which an individual or a sector prevents another from achieving a desired goal; he states that, if not checked early, it could lead to dangerous situations in the future that affect organizational effectiveness. Sometimes conflict can be covert; not all conflict ends in a physical exchange of blows or the use of weapons. Conflict can also be inter-group; this can occur due to differences between two or more groups, such as departments or workgroups in an organization, communities, and ethnic groups. Pandy (2019) asserts that this kind of conflict may arise from a lack of mutual agreement, differences in group goals, limited resources, poor communication channels, overlapping responsibilities, struggles for recognition, etc.; hence, the management of conflict by managers determines whether it leads to a functional or dysfunctional outcome. Managers in an organization should be able to identify the types of conflict so that they can apply an appropriate strategy that creates positive results. Organizational conflict crops up when there is disagreement on how a job or task should be executed; this could be disagreement between individuals or between groups, and how this conflict is managed determines its outcome. Effective conflict management enhances organizational development through employee dedication, enthusiasm and absorption; it also boosts morale and stimulates individuals, which in turn leads to organizational productivity. Organizational effectiveness is one of the measures of performance used to assess how outputs interact with the economic and social environment. Zheng, Yang and McLean (2010) opined that productivity generally determines the policy objectives of the organization, or the extent to which organizational goals are realized. Productivity is sometimes used to reflect the all-inclusive performance of an organization, because it is broader than other concepts of organizational performance.
The ability to execute a function with optimal levels of input and output determines the productivity of any organization (Amin & Shila, 2015). The business environment is highly dynamic and ever-changing due to globalization; any organization that wants to remain relevant and gain competitive advantage must enhance its organizational productivity. Education is the second field of work in which conflicts most often occur, after the government field (Bakker, Albrecht & Leiter, 2011). When two or more people, countries, nations or groups disagree on some topic, conflict may occur because of differences in ideas, behaviours, perceptions, interests, attitudes and politics (Afzal, Khan and Ali, 2009). As per Moily (2008), conflict occurs because of tensions between individuals and society, poor governance, historical background, and socioeconomic conditions. It was also stated by Agwu (2013) that differences in personality backgrounds, functional interdependence, autonomy, and status give rise to conflicts. Conflict also arises when a person blocks another person's achievements (Hotepo, Sokere, Azeez, and Ajeminigbohun, 2010). Hussain and Mujtaba (2012) stated that conflicts between employees in the workplace create stress for the employees and become a reason for low performance in the organization. The nature of workplace conflict affects employee productivity and, consequently, the competitiveness of the organization. Employee productivity and morale are affected by the effectiveness of an organization's performance and reward management (Yazici, 2008). A firm has a competitive advantage when it retains high levels of performance compared to its competitors (Harmon, 2014). Job satisfaction, organizational commitment, remuneration and rewards are factors affecting productivity (Khan, Farooq and Ullah, 2010). Employee motivation is the most influential factor of productivity (Kulchmanov and Kaliannan, 2014). Raza (2012) highlighted the previous statements, namely the need to find a way to improve productivity so that employees can perform better, and vice versa. Improving employee skills and providing training are other factors that maximize productivity (Nielsen, 2013). In addition, Euske and Lebans (2006) noted that the productivity of employees is not only financially dependent but also non-financially dependent, which ensures a level of achievement of organizational objectives and goals. Productivity is an element of capacity and inspiration, where capacity comprises the skills and assets required to carry out assignments, while inspiration is depicted as an internal force that drives individuals to act towards something. It is also said that employees are more susceptible to turnover if they are unsatisfied and therefore lose the motivation to perform well. A happy and satisfied worker has higher productivity, and it is easier for management to motivate high performers to reach firm goals (Kinicki and Kreitner, 2007). Many researchers have defined workplace politics as actions that influence activities, behaviours and decision-making through the use of power. Bouckenooghe, Zafar and Raja (2015) described workplace politics as the premeditated use of individuals' power to fulfil a person's interests and goals at their workplace. There are two streams that elaborate studies of workplace politics (Ferris, 2002).
The first is the older stream of tactics and behaviours of political influence (Bodla and Danish, 2013), and the second concerns employees' perceptions of workplace politics in their working environment. Workplace politics is a growing phenomenon and one of the most debated topics of recent times; over the past few decades, studies on workplace politics have increased (Gull and Zaidi, 2012). Conflict is a part of organizational life and may occur between individuals, between individuals and groups, as well as between groups, and it may emerge as a result of workplace politics (Lammer, 2009). Eze (2011) argued that workplace politics is self-serving behaviour that seeks self-esteem, advantages and benefits at the expense of others. Workplace politics in tertiary organizations often seeks to secure or maximize individual interests or, on the other hand, to avoid negative outcomes inside the organization (Ferris and Kacmar, 2011). Employees in tertiary organizations are constantly engaged in politics to pursue individual goals. While workplace politics is a correlate of employee conflict, Udoye (2011) saw conflicts as dysfunctional, though they can be important because they allow an issue to be considered from various points of view. Politics involves the human component, and the resulting relationships are political and have to be overseen and managed with care, maturity and sincerity before they escalate uncontrollably (Krietner and Kinicki, 2004).

Statement of the problem
Conflict has been viewed as evil, but constructive conflict management is a high point for any organization. Hence, conflict management is the means of reducing the dysfunctional aspects of this phenomenon while increasing its functional aspects. The aim of workplace conflict management is to create a conducive workplace atmosphere free of resentment, incivility and violence, which could otherwise lead to physical, psychological or financial damage to both employees and the organization. Effective workplace conflict management becomes an essential tool for encouraging employee engagement and maintaining competitive advantage. Conflict is frequently seen as dysfunctional, but it has been established that not every conflict results in negative effects on organizations; some have positive effects on team participation.

Research Hypotheses
1. There is no significant relationship between workplace conflicts and employee productivity.
2. There is no significant relationship between workplace conflicts and employee emotional stability.
3. Workplace conflicts do not significantly predict employee productivity and emotional stability.

Methodology
This study adopted a descriptive survey research design, a form of descriptive design that uses a representative sample to collect data for the systematic description of an existing situation or phenomenon. The population consisted of all employees of tertiary institutions in Ondo State. A simple random sampling technique was used to choose the sample for the study, which consisted of 200 employees across Ondo State. The instrument for data collection was a self-constructed questionnaire titled Workplace Conflicts and Employee Productivity and Emotional Stability (WCEPES). 200 copies of the questionnaire were distributed and 200 were returned. The instrument was divided into sections: Section A contains the personal data of the respondents, while Section B contains the items addressing the research questions. The face and content validity of the instrument was ascertained by the researcher.
The test-retest technique was used by the researcher: the questionnaire was distributed to a sample of twenty respondents in Ekiti State, and after a two-week interval the same instrument was re-administered to the same set of respondents. Pearson Product Moment Correlation (PPMC) was used to determine the correlation coefficient, which was 0.82, confirming that the questionnaire was reliable. Data were analyzed using frequency counts, percentages and the t-test.

Results and Discussion

Hypothesis One
H0: There is no significant relationship between workplace conflicts and employee productivity.
Table 1 shows that the calculated r (0.168) is less than the tabulated r (0.195) at the 0.05 level of significance. The null hypothesis is accepted. This implies that there is no significant relationship between workplace conflicts and employee productivity.

Hypothesis Two
H0: There is no significant relationship between workplace conflicts and employee emotional stability.
Table 2 shows that the calculated r (0.382) is greater than the tabulated r (0.195) at the 0.05 level of significance. The null hypothesis is rejected. This implies that there is a significant relationship between workplace conflicts and employee emotional stability.

Hypothesis Three
H0: Workplace conflicts do not significantly predict employee productivity and emotional stability.
Table 3 shows that the calculated r (0.432) is greater than the tabulated r (0.195) at the 0.05 level of significance. The null hypothesis is rejected. This implies that workplace conflicts significantly predict employee productivity and emotional stability.

Conclusion
Conflict is an inevitable part of daily social life; it is therefore imperative that the manager identifies the nature and significance of conflicts in an organization as well as recognizing the levels or types of conflict. When conflict is properly managed, it enhances learning and creates a spirit of teamwork and cooperation, which is capable of increasing organizational innovation owing to the diversity of the workforce, thereby leading to effectiveness and performance in an organizational setting.

Recommendations
The following recommendations were drawn from the findings:
1. The employer should try as much as possible to reduce workplace conflicts to the barest minimum and provide an enabling environment for employee productivity.
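The decision rule applied in Tables 1-3 (compare the calculated r with a tabulated critical r at the 0.05 level) can be illustrated with simulated data as follows; this is a sketch of the procedure, not the study's code, and the critical value naturally depends on the sample size underlying the table used:

```python
# Illustration of the hypothesis-testing decision rule used above: compute
# Pearson's r for two survey scores and compare it with the critical r at
# alpha = 0.05 (two-tailed). Data here are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
conflict = rng.normal(size=n)                         # workplace-conflict scores
productivity = -0.15 * conflict + rng.normal(size=n)  # weakly related outcome

r, p = stats.pearsonr(conflict, productivity)

# Critical r derived from the t distribution with n - 2 degrees of freedom.
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)
r_crit = t_crit / np.sqrt(n - 2 + t_crit**2)

decision = "reject H0" if abs(r) > r_crit else "accept H0"
print(f"r = {r:.3f}, critical r = {r_crit:.3f} -> {decision}")
```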
Fuzzy Differential Subordinations Obtained Using a Hypergeometric Integral Operator

This paper is related to notions adapted from fuzzy set theory to the field of complex analysis, namely fuzzy differential subordinations. Using ideas specific to geometric function theory from the field of complex analysis, fuzzy differential subordination results are obtained using a new integral operator introduced in this paper by means of the well-known confluent hypergeometric function, also known as the Kummer hypergeometric function. The new hypergeometric integral operator is defined by choosing particular parameters, having as inspiration the operator studied by Miller, Mocanu and Reade in 1978. Theorems are stated and proved which give, as corollaries, conditions such that the newly defined integral operator is starlike, convex and close-to-convex, respectively. The example given at the end of the paper demonstrates the applicability of the obtained results.

Introduction
The introduction of the fuzzy set concept by Lotfi A. Zadeh in the 1965 paper "Fuzzy Sets" [1] gave no hint of the extraordinary evolution of the concept that followed. Met with distrust at first, the concept is very popular nowadays, having been adapted to many research topics. Mathematicians were also interested in embedding the concept of the fuzzy set in their research, and it has indeed been included in many mathematical approaches. The review paper included in the present special issue, devoted to the celebration of the 100th anniversary of Zadeh's birth [2], shows how fuzzy set theory has evolved in relation to certain branches of science, and points out the contribution of one of Zadeh's disciples, Professor I. Dzitac, to the development of soft computing methods connected with fuzzy set theory. Professor I. Dzitac celebrated his friendship with the multidisciplinary scientist Lotfi A. Zadeh by writing the introductory paper of a special issue on fuzzy logic dedicated to the centenary of Zadeh's birth [3].

As far as complex analysis is concerned, fuzzy set theory was first included in studies related to geometric function theory in 2011, when the first paper appeared introducing the notion of subordination into fuzzy set theory [4], drawing its inspiration from the classical aspects of subordination introduced by Miller and Mocanu [5,6]. The papers published next followed the line of research set by Miller and Mocanu and referred to fuzzy differential subordination, adapting notions from the already well-established theory of differential subordination [7-9]. The idea was soon picked up by researchers in geometric function theory, and all the classical lines of research on this topic were adapted to the new fuzzy aspects. A review paper published in 2017 [10] included in its references the first published papers related to this topic, validating its development. The dual notion of fuzzy differential superordination was also introduced in 2017 [11]. An important topic in geometric function theory is conducting studies which involve operators. Such studies for obtaining new fuzzy subordination results were published soon after the notion was introduced, in 2013 [12], continued during the following years [13-16], and were later complemented by superordination results [17-19]. In recent years, many papers have been published which show that research on this topic is in a continuous process of development; we mention only a few here [20-24].
Following this line of research, a new hypergeometric integral operator is introduced in this paper using the confluent (or Kummer) hypergeometric function, having as inspiration the operator studied by Miller, Mocanu and Reade in 1978, and obtained by taking specific values for the parameters involved in its definition. Fuzzy differential subordinations are obtained and the fuzzy best dominants are given, which facilitates obtaining sufficient conditions for the univalence of this operator.

Preliminaries
The research presented in this paper is carried out in the general environment known from the theory of differential subordination given in the monograph [25], combined with the fuzzy set notions introduced in [4,7]. The unit disc of the complex plane is denoted by U, and H(U) stands for the class of holomorphic functions in U. For a ∈ C and n ∈ N*, the following subclass of holomorphic functions is considered:

H[a, n] = {f ∈ H(U) : f(z) = a + a_n z^n + a_{n+1} z^{n+1} + ...},

together with the class A = {f ∈ H(U) : f(z) = z + a_2 z^2 + ...}. For 0 ≤ α < 1, let S*(α) = {f ∈ A : Re(zf′(z)/f(z)) > α} denote the class of starlike functions of order α; for α = 0, the class of starlike functions is denoted by S*. For α < 1, let K(α) = {f ∈ A : Re(zf″(z)/f′(z) + 1) > α} denote the class of convex functions of order α; for α = 0, the class of convex functions is denoted by K. The subclass of close-to-convex functions is defined as the class of functions f ∈ H(U) for which there exists ϕ ∈ K such that Re(f′(z)/ϕ′(z)) > 0, z ∈ U; it is also said that the function f is close-to-convex with respect to the function ϕ.

Definition 1 ([4]). Let D ⊂ C and let z0 ∈ D be a fixed point. Take the functions f, g ∈ H(D). The function f is said to be fuzzy subordinate to g, written f ≺_F g or f(z) ≺_F g(z), if there exists a function F : C → [0, 1] such that (i) f(z0) = g(z0), and (ii) F(f(z)) ≤ F(g(z)) for all z ∈ D.

Definition 2 ([7], Definition 2.2). Let ψ : C³ × D → C and a ∈ C, let h be univalent in U with h(z0) = a, let g be univalent in D with g(z0) = a, and let p be analytic in D with p(z0) = a, such that ψ(p(z), zp′(z), z²p″(z); z) is analytic in D, and let F : C → [0, 1], F(z) = |z|/(1 + |z|). If p is analytic in D and satisfies the (second-order) fuzzy differential subordination

F(ψ(p(z), zp′(z), z²p″(z); z)) ≤ F(h(z)), z ∈ D, (1)

i.e., ψ(p(z), zp′(z), z²p″(z); z) ≺_F h(z), z ∈ D, (2)

then p is called a fuzzy solution of the fuzzy differential subordination. The univalent function q is called a fuzzy dominant of the fuzzy solutions of the differential subordination, or more simply a fuzzy dominant, if F(p(z)) ≤ F(q(z)), z ∈ D, for all p satisfying (1) or (2). A fuzzy dominant q̃ that satisfies F(q̃(z)) ≤ F(q(z)), i.e., q̃(z) ≺_F q(z), z ∈ D, for all fuzzy dominants q of (1) or (2) is said to be the fuzzy best dominant of (1) or (2). Note that the fuzzy best dominant is unique up to a rotation in D.

Lemma 1 ([25], Theorem 2.2). Let δ, ω ∈ C with ω ≠ 0, let h be a convex function in D, and let F : C → [0, 1], F(z) = |z|/(1 + |z|). Suppose that the Briot-Bouquet differential equation

q(z) + zq′(z)/(δq(z) + ω) = h(z)

has a univalent solution q. If the analytic function p satisfies the fuzzy differential subordination

F(p(z) + zp′(z)/(δp(z) + ω)) ≤ F(h(z)), z ∈ D, (3)

i.e., p(z) + zp′(z)/(δp(z) + ω) ≺_F h(z), z ∈ D, (4)

then F(p(z)) ≤ F(q(z)), z ∈ D, and q is the fuzzy best dominant of the fuzzy differential subordination (3) or (4).

The confluent (or Kummer) hypergeometric function has been investigated in connection with univalent functions more intensely starting from 1985, when it was used by L. de Branges in the proof of Bieberbach's conjecture [26]. The applications of hypergeometric functions in univalent function theory are very well pointed out in the review paper recently published by H.M. Srivastava [27]. The operator used for obtaining the original results presented in this paper was obtained using the confluent (or Kummer) hypergeometric function and a general operator studied in 1978 by S.S. Miller, P.T. Mocanu and M.O. Reade [28], by taking specific values for the parameters β, γ, α, δ in its definition (relation (6)). The confluent (or Kummer) hypergeometric function was recently used in many papers for defining new, interesting operators, as can be seen in [29-32].
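Since the display referenced as relation (5) did not survive extraction, the standard series form of the confluent (Kummer) hypergeometric function is recalled below; the parameter names a and c are the conventional ones and are our assumption for the lost display:

```latex
% Standard series form of the confluent (Kummer) hypergeometric function;
% parameter names (a, c) are assumed, since the paper's display (5) was lost.
\[
  \phi(a; c; z) \;=\; {}_{1}F_{1}(a; c; z)
  \;=\; 1 + \frac{a}{c}\,\frac{z}{1!} + \frac{a(a+1)}{c(c+1)}\,\frac{z^{2}}{2!} + \cdots
  \;=\; \sum_{k=0}^{\infty} \frac{(a)_{k}}{(c)_{k}}\,\frac{z^{k}}{k!},
  \qquad c \neq 0, -1, -2, \ldots,\ z \in U,
\]
where $(a)_{k} = a(a+1)\cdots(a+k-1)$ denotes the Pochhammer symbol.
```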
Two more lemmas from differential subordination theory, necessary in the proofs of the original results, are listed next.

Lemma 2 (Theorem 4.6.3, p. 84). A necessary and sufficient condition for a function f ∈ H(U) to be close-to-convex is that

∫ from θ1 to θ2 of Re(1 + zf″(z)/f′(z)) dθ > −π, with z = re^{iθ},

for all θ1, θ2 with 0 ≤ θ1 < θ2 < 2π and r ∈ (0, 1).

Lemma 3. If f ∈ K, then f is starlike of order 1/2, i.e., K ⊂ S*(1/2).

Main Results
The new hypergeometric integral operator is defined using Definition 3 and the integral operator given by relation (6). Using the method of differential subordination, a theorem is next proved giving the fuzzy best dominant of a certain fuzzy differential subordination. Using specific functions as the fuzzy best dominant, conditions for the starlikeness and convexity of the operator M are obtained as corollaries.

Theorem 1. For β, γ ∈ C with β > 1 and γ > 0, let the fuzzy function F : C → [0, 1] be given by (9), and consider the holomorphic function in U given by the associated equation, where q is a univalent solution in U which satisfies the corresponding fuzzy differential subordination. Consider φ(u, v; z), the confluent (or Kummer) hypergeometric function given by (5), and the operator M(z) given by (7). Then M(z) is analytic in U, M(z) ≺_F q(z), z ∈ U, and q is the fuzzy best dominant.

Remark 4. Using particular expressions for the fuzzy best dominant q, sufficient conditions for the starlikeness of the operator M(z) given by (7) can be obtained. If the function q(z) = (1 − z)/(1 + z) is considered in Theorem 1, the following corollary is obtained.

Corollary 1. Under the hypotheses of Theorem 1, M(z) is analytic in U, and q(z) = (1 − z)/(1 + z) is the fuzzy best dominant.

Proof. By using the function q(z) = (1 − z)/(1 + z) in relation (27) from the proof of Theorem 1, the corresponding fuzzy subordination is obtained. Since Re q(ρe^{iα}) = (1 − ρ²)/(1 + 2ρ cos α + ρ²) > 0 for 0 < ρ < 1, q is convex, and since Re((1 − z)/(1 + z)) > 0 for z ∈ U, differential subordination (28) is equivalent to the stated conclusion.

Remark 5. Using the convex function q(z) = (1 + z)/(1 − z) as the fuzzy best dominant in Theorem 1, sufficient conditions for the convexity of the operator M(z) given by (7) can be obtained as a corollary.

Corollary 2. For β, γ ∈ C with β > 1 and γ > 0, let the fuzzy function F : C → [0, 1] be given by (9), and consider the holomorphic function in U given by the associated equation, where the function q(z) = (1 + z)/(1 − z) is a univalent solution in U which satisfies the corresponding fuzzy differential subordination. Consider φ(u, v; z), the confluent (or Kummer) hypergeometric function given by (5), and the operator M(z) given by (7). Then M(z) is analytic in U, and q(z) = (1 + z)/(1 − z) is the fuzzy best dominant.

Proof. Differentiating relation (14) from the proof of Theorem 1, we obtain an identity which is equivalent to relation (32). By replacing (32) in (31), we obtain relation (33). Differentiating relation (33) and carrying out some computations, we obtain relation (34). Using relation (32) in (34), we obtain relation (35), and by considering (35) in (30), the required inequality emerges. In order to obtain the expected result, Lemma 1 will be used. For that, let ψ : C² × U → C be given by (21); for r = p(z) and s = zp′(z) from relation (35), we have ψ(p(z), zp′(z)) = E(u, v; z), z ∈ U. (37) Using (37) in (30), we obtain relation (38). Using Lemma 1 for δ = β − 1 and ω = γ ≠ 0, we obtain relation (39), which, according to Lemma 1, implies relation (40). Using relation (32) in (40), we obtain relation (41). Since q is convex, relation (41) is equivalent to the stated conclusion.

Remark 6. Using Lemma 3 and the convexity property proved for the operator M(z), the following corollary can be stated, giving the property of the integral operator M(z) defined by (7) of being starlike of order 1/2.

Remark 7. Using the function q(z) = (1 − 2z)/(1 + z) as the fuzzy best dominant in Theorem 1, we get the following corollary, which gives a sufficient condition for the operator M(z) given by (7) to be convex of the order involved in its definition.
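The positivity fact invoked in the proof of Corollary 1 can be verified by the following standard computation (ours, not reproduced from the paper):

```latex
% Direct verification of the real-part formula used in the proof of
% Corollary 1; this computation is classical and not specific to the paper.
\[
  \operatorname{Re}\frac{1-z}{1+z}
  = \operatorname{Re}\frac{(1-z)(1+\bar{z})}{|1+z|^{2}}
  = \frac{1-|z|^{2}}{|1+z|^{2}}
  = \frac{1-\rho^{2}}{1+2\rho\cos\alpha+\rho^{2}},
  \qquad z=\rho e^{i\alpha},\ 0<\rho<1,
\]
which is positive on $U$, so $q(z)=\dfrac{1-z}{1+z}$ maps the unit disc into the right half-plane.
```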
Using the notion of fuzzy differential subordination and results related to it, the fuzzy best dominant of a certain fuzzy differential subordination is given in the first theorem proved. Using particular functions as fuzzy best dominants, several corollaries are stated, giving sufficient conditions for the operator M(z) to be starlike, convex, starlike of order 1/2 and convex of order −1/2, respectively. The second theorem proved establishes the close-to-convexity of the operator M(z). For further study, the properties already proved regarding the starlikeness and convexity of the operator M(z) could inspire applications in introducing special classes of analytic functions. The operator could also be studied using the dual theory of fuzzy differential superordination, possibly obtaining sandwich-type theorems by connecting them with the present results, a usual outcome in geometric function theory. Since particular values of the parameters have been used for defining this operator, it might be interesting to try other values in order to obtain further potentially interesting operators. Hypergeometric functions being well known to have numerous applications in physics, engineering and statistics, applications of operators involving these functions could prove useful in other disciplines. The theory of fuzzy differential subordination is still very new, and one cannot yet predict what applications in real life or in other scientific domains it might have. These are subjects for investigation in long-term future studies.
Context-dependent EMT programs in cancer metastasis

In this review, Aiello and Kang discuss the molecular mechanisms, regulatory networks, and functional consequences of epithelial–mesenchymal transition (EMT) in the context of cancer metastasis, with a particular focus on partial EMT and cellular plasticity.

Introduction
Epithelial-mesenchymal transition (EMT) is a developmental program that facilitates motility in otherwise adherent epithelial cells. During EMT, an epithelial cell sheds its connections to neighboring cells, converts from apico-basal to front-back polarity, and takes on the properties of a migratory mesenchymal cell (Greenburg and Hay, 1982). Both EMT and its reverse process, mesenchymal-epithelial transition (MET), occur throughout development, wound healing, fibrosis, and tumor progression. During embryogenesis, EMT is required for gastrulation, the stage at which epithelial epiblast-derived cells ingress and transition into mesenchymal cells, forming the three germ layers (Carver et al., 2001). At the onset of gastrulation, fibroblast growth factor signaling in the primitive streak activates the EMT-transcription factor (TF) Snail (SNAI1), which in turn transcriptionally represses E-cadherin, resulting in EMT (Nakaya and Sheng, 2008). A similar paradigm plays out repeatedly through development: EMT also occurs during neural crest cell migration (Cheung et al., 2005), somitogenesis (Dale et al., 2006), and cardiac valve formation (Timmerman et al., 2004). In adult organisms, facets of the EMT program are activated in response to cutaneous injury to facilitate collective migration. Keratinocytes at the edge of the wound respond to epidermal growth factor and TGF-β signaling to activate the EMT-TF Slug (SNAI2), which promotes motility and wound closure (Haensel and Dai, 2018). EMT also occurs in pathological conditions, including fibrosis and cancer. In chronic obstructive pulmonary disease, chronic inflammation leads to small-airway fibrosis that appears to be driven by the EMT of bronchial epithelial cells (Jolly et al., 2018). Similarly, EMT of alveolar epithelial cells has been reported as the source of myofibroblasts in idiopathic pulmonary fibrosis (Kim et al., 2006). Likewise in the kidney, tubular epithelial cells undergo EMT to contribute to renal fibrosis (Iwano et al., 2002). Finally, EMT plays a significant role in tumor progression, where it has been implicated in many of the hallmarks of cancer (Hanahan and Weinberg, 2011), particularly metastasis. This review discusses the evolving definition of EMT in the context of cancer as well as its functional consequences.

Characteristics and functional consequences of cancer EMT
Since EMT enhances cellular mobility, it is no surprise that it has been connected to the dissemination of tumor cells. Indeed, carcinomas often lose epithelial markers or express EMT markers at the invasive front (Brabletz et al., 2001; Vincent et al., 2009; Kahlert et al., 2011; Paterson et al., 2013; Kunita et al., 2018) and in circulating tumor cells (Aktas et al., 2009; Hyun et al., 2016; Lapin et al., 2017), which represent the first steps of the metastatic cascade (invasion and intravasation, respectively). Expression of EMT-TFs correlates with poor clinical outcomes in cholangiocarcinoma, gastric cancer, and breast cancer, among others (Ryu et al., 2012a,b; Jang et al., 2015).
This is in part due to EMT's role in promoting metastasis, which will be discussed in detail in a later section; however, it should be noted that while the primary consequence of EMT is increased motility, the phenomenon is also associated with stemness, therapy resistance, and immune evasion. The connection between EMT and stem cell properties was first reported in a study by Mani et al. (2008), which demonstrated that mammary epithelial cells and breast cancer cells that have undergone EMT exhibit stem cell markers (CD44hi/CD24lo) and functional characteristics. When EMT was induced through treatment with TGF-β or overexpression of EMT-TFs, the cells formed more mammospheres (an in vitro test for self-renewal) and had an increased ability to repopulate a cleared mammary fat pad (in the case of normal mammary epithelial cells) or form tumors (in the case of breast cancer cells; Mani et al., 2008). Similarly, in prostate cancer, EMT was accompanied by an increase in the expression of embryonic stem cell markers as well as an enhanced ability to form spheres in vitro and tumors in vivo (Kong et al., 2010). In a mouse model of breast cancer recurrence, Snail-driven EMT promoted the regrowth of tumors (Moody et al., 2005), and in patients, residual breast tumors left behind after conventional therapy often exhibit EMT and stem cell features (Creighton et al., 2009). Thus, in addition to stemness (or perhaps because of stemness), EMT is strongly associated with therapy resistance. Tumor cells selected for chemoresistance acquire an EMT phenotype (Shah et al., 2007); conversely, tumor cells that are induced to undergo EMT acquire resistance to chemotherapy (Yin et al., 2007), and inhibition of EMT can increase drug sensitivity (Ren et al., 2013; Fischer et al., 2015; Zheng et al., 2015). EMT-TFs confer chemo- and radioresistance through a number of molecular mechanisms, including resistance to apoptosis, enhanced DNA damage repair, and altered drug metabolism (van Staalduinen et al., 2018). With the advent of cancer immunotherapy, it has become apparent that EMT also protects tumor cells from immune cell-mediated killing. The first clue was the finding that SNAIL-induced EMT promotes melanoma metastasis by induction of regulatory T cell-mediated immunosuppression (Kudo-Saito et al., 2009). Cancer cells that undergo EMT secrete cytokines such as TGF-β, IL-10, and thrombospondin-1 that result in a generally immunosuppressive tumor microenvironment (Yaguchi et al., 2011). Breast cancer cell lines that skew mesenchymal recruit more immunosuppressive T regulatory cells and M2-polarized macrophages and fewer effector and cytotoxic T cells compared with epithelial lines when implanted into an immunocompetent host; moreover, tumors derived from the mesenchymal cell lines are resistant to anti-cytotoxic T lymphocyte-associated protein 4 (CTLA4) immunotherapy (Dongre et al., 2017). EMT has been associated with resistance to cytotoxic T lymphocyte killing due to the interruption of the immunological synapse (Akalay et al., 2013). Another mechanism of EMT-related immune escape is the up-regulation of immune checkpoint proteins on tumor cells such as programmed death ligand-1 (PD-L1), PD-L2, and B7-H3 (Lou et al., 2016; Noman et al., 2017). With a hand in seemingly every aspect of tumor progression, EMT is a formidable obstacle in the treatment of cancer.
Molecular mechanisms of EMT

Many developmental signal transduction pathways are capable of inducing EMT, including the TGF-β, epidermal growth factor, fibroblast growth factor, hepatocyte growth factor, Wingless/integrated, Sonic hedgehog, and Notch pathways (Li et al., 1994, 2006; Miettinen et al., 1994; Kim et al., 2002; Timmerman et al., 2004). Cytokines such as IL-8, IL-6, and TNF-α, often secreted by tumor stroma, can also promote EMT (Sullivan et al., 2009; Wu et al., 2009; Fernando et al., 2011). Tumor cell interactions with extracellular matrix components can also induce EMT. For example, ovarian and prostate cancer cell lines that come into contact with type I collagen up-regulate the EMT-TFs Snail and Slug (Cheng and Leung, 2011). Moreover, EMT programs can be activated through mechanotransduction: matrix stiffness, fluid flow, osmotic pressure, and tissue tension all influence the EMT status of cancer cells (Mihalko and Brown, 2018). In the context of breast cancer, dense collagen fibrils increase matrix stiffness, which in turn promotes the nuclear translocation of the EMT-TF Twist1 (TWIST1; Wei et al., 2015). But how do epithelial cells respond to these extracellular signals to achieve the dramatic changes necessary to become motile? At the molecular level, cells going through EMT must repress epithelial genes that contribute to cellular adhesion (adherens junctions, tight junctions, and desmosomes), allowing them to detach from their neighbors. The classic epithelial marker E-cadherin (CDH1), a critical component of the adherens junction, is the most prominent target of repression during the EMT process. A number of EMT-TFs, including Snail, Slug, and Zinc-finger E-box binding homeoboxes 1 and 2 (ZEB1/2), directly target CDH1 and other epithelial genes for transcriptional repression (Cano et al., 2000; Comijn et al., 2001; Bolós et al., 2003; Shirakihara et al., 2007). EMT-TFs themselves are repressed by epithelial-associated TFs such as ELF5, Grainyhead-like 2 (GRHL2), and Ovo-like zinc fingers 1 and 2 (OVOL1/2), which help maintain an epithelial phenotype and can drive MET (Chakrabarti et al., 2012; Cieply et al., 2012; Roca et al., 2013). EMT-TFs are also negatively regulated by micro-RNAs (miRNAs), including miR-34, which represses SNAI1, and the miR-200 family, which represses ZEB1 (Bracken et al., 2008; Gregory et al., 2008; Korpal et al., 2008; Park et al., 2008; Siemens et al., 2011). The balance between EMT-TFs and their antagonistic miRNAs plays a critical role in determining where a cell falls on the EMT spectrum as well as its potential for plasticity and metastasis (Celià-Terrassa et al., 2018). A cell undergoing EMT must also activate mesenchymal genes to promote the morphological and behavioral transformations necessary to become migratory. The EMT-TFs Twist (TWIST1) and Paired related homeobox 1 (PRRX1) are strong promoters of the mesenchymal transcription program (Yang et al., 2004; Ocaña et al., 2012). Although the Snail and Zeb families of EMT-TFs were originally thought to act only as transcriptional repressors, it has been demonstrated that they can also act as transcriptional activators in certain contexts (Wels et al., 2011; Rembold et al., 2014; Lehmann et al., 2016). EMT-TFs promote the expression of crucial mesenchymal genes such as vimentin (VIM), N-cadherin (CDH2), fibronectin (FN1), and fibroblast-specific protein 1 (FSP1). These downstream mesenchymal targets reshape the cytoskeleton and cell membrane to allow for migration.
Beyond transcriptional control of EMT

Despite the intense focus within the field on EMT-TFs, it has become increasingly apparent that the mechanisms behind this process are not limited to the transcriptional level. EMT is facilitated through many levels of regulation, from epigenetic to posttranslational modifications and every step in between (Fig. 1). DNA methylation of the CDH1 promoter is one mechanism of E-cadherin repression: ZEB1 interacts with DNA methyltransferase 1 (DNMT1) to accomplish this (Fukagawa et al., 2015). EMT-TFs can also recruit histone-modifying enzymes, including the histone demethylase LSD1, histone deacetylases HDAC1/2, and the polycomb repressive complex 2 (PRC2), to repress the CDH1 promoter (Herranz et al., 2008; von Burstin et al., 2009; Lin et al., 2010; Aghdassi et al., 2012; Skrypek et al., 2017). At the RNA level, alternative splicing by Epithelial splicing regulatory proteins 1 and 2 (ESRP1/2), RNA binding motif protein 47 (RBM47), Quaking, RNA binding fox-1 homologue 2 (RBFOX2), and Muscleblind-like splicing regulator 1 (MBNL1) plays a role in regulating EMT. ESRP1/2 and RBM47 promote epithelial-specific splicing, while the latter proteins promote mesenchymal-specific splicing (Shapiro et al., 2011; Venables et al., 2013; Yang et al., 2016; Neumann et al., 2018). At the posttranscriptional level, besides miRNAs, long noncoding RNAs (lncRNAs) also contribute to EMT regulation. For example, lncRNA-activated by TGF-β (lncRNA-ATB) and lncRNA-PNUTS have been suggested to act as sponges for the miR-200 family and miR-205, respectively, sequestering these miRNAs to prevent them from inhibiting EMT-TF transcripts (Grelet et al., 2017). Another EMT-promoting lncRNA, translational regulator lncRNA, negatively regulates the translation of CDH1 mRNA (Gumireddy et al., 2013; Dhamija and Diederichs, 2016). Similarly, cytoplasmic polyadenylation element binding protein 1 (CPEB1) promotes the shortening of the polyA tail of matrix metalloprotease 9 (MMP9) mRNA, which reduces its translation; upon deletion of CPEB1, mammary tumor cells undergo EMT and become more metastatic (Nagaoka et al., 2016). In contrast, embryonic lethal abnormal vision-like RNA binding protein 1 (ELAVL1 or HUR) promotes EMT by stabilizing Snai1 mRNA (Zhou et al., 2016). Posttranslational regulation of EMT-TFs is also an important level of control. SNAI1 and TWIST1 can be acetylated by p300, which modulates their stability, localization, and interactions with other proteins (Shiota et al., 2010; Chang et al., 2017). SNAI1, SNAI2, TWIST1, and ZEB1 can also be phosphorylated, which affects their stabilization/degradation. Glycogen synthase kinase 3 β (GSK3β) and PKD1 phosphorylate Snail and Slug to promote their degradation, while MAPKs and Ataxia-Telangiectasia mutated serine/threonine kinase phosphorylate TWIST1 and ZEB1, respectively, to stabilize them (Hong et al., 2011; Kim et al., 2012; Zhang et al., 2014; Zheng et al., 2014). ZEB1 is also regulated by the E3 ubiquitin ligase SIAH, which marks it for degradation. Epithelial proteins can likewise be posttranslationally regulated during EMT. The E3 ubiquitin ligase Hakai ubiquitinates E-cadherin, inducing its endocytosis and destruction (Fujita et al., 2002).

Figure 1. Layers of EMT regulation. EMT is regulated at the epigenetic, transcriptional, posttranscriptional, translational, and posttranslational levels. EMT-TFs recruit DNA methylation and histone modification machinery to stably repress epithelial genes and prevent their transcription. They are opposed by epithelial-associated TFs, which in turn repress EMT-TFs. Both epithelial and mesenchymal transcripts are alternatively spliced and regulated by miRNAs and lncRNAs. Translation initiation, mRNA stability, and polyadenylation affect the translation rate of epithelial and mesenchymal transcripts. Posttranslational modifications such as ubiquitylation, acetylation, and phosphorylation determine the balance between stability and degradation of epithelial and mesenchymal proteins.

During hepatocyte growth factor-mediated EMT, E-cadherin can be phosphorylated by PKCδ, which disrupts its interaction with β-catenin on the cytoplasmic side and with E-cadherin on adjacent cells. The E-cadherin protein also contains four asparagine (Asn) residues that are N-glycosylated, two of which are critical to its adhesive function: site-directed mutagenesis of Asn-554 and Asn-566 significantly reduced adhesion and enhanced the migratory ability of a breast cancer cell line (Zhao et al., 2008). EMT is a tightly regulated process because the consequences of an aberrant transition are significant, especially in the context of cancer.

EMT/MET model of metastasis

The EMT/MET model of metastatic dissemination attempts to reconcile the seemingly contradictory observation that metastatic lesions tend to have epithelial features, much like the primary tumor they arose from, even though epithelial cells are not inherently invasive. The model postulates that cancer cells of epithelial origin undergo EMT to achieve the first steps of the metastatic cascade, including invasion into the tumor stroma, intravasation, and possibly extravasation at distant organs; however, in order to successfully form secondary tumors, cancer cells must undergo the reverse process of MET after reaching the metastatic site. Just like EMT, which is often induced by factors produced by stromal cells at the invasive front, MET is often an active process stimulated by molecular cues from metastatic niches in secondary organ sites (Gao et al., 2012; Del Pozo Martin et al., 2015; Esposito et al., 2019). While there is strong support in the literature for the role of EMT/MET in metastasis, the model continues to be challenged and updated with new research findings. This section presents results from these studies and synthesizes competing views for the EMT/MET model of metastasis.

EMT occurs during the natural history of tumor progression

The first challenge for the EMT/MET model was the difficulty in identifying tumor cells that had undergone EMT in vivo. Pathologists had long noted the mesenchymal morphology of what seemed to be cancer cells at the invasive front of tumors, but the origin of those cells was unclear because a mesenchymal tumor cell is for the most part indistinguishable from a mesenchymal stromal cell. However, with the integration of lineage labeling techniques into the cancer field, it became possible to detect tumor cells that had undergone EMT spontaneously in vivo. The first direct evidence for EMT in breast cancer was reported by Trimboli et al. (2008). In this study, whey acidic protein (Wap)-Cre was used to genetically label mammary epithelial cells with LacZ. These strains were crossed to three murine models of breast cancer: Wap-myc, mouse mammary tumor virus (MMTV)-neu, and MMTV-polyoma middle T antigen (PyMT).
In Wap-Cre/Wap-myc tumors, LacZ+ cells could be found in the stroma (i.e., cells of epithelial origin that acquired mesenchymal characteristics); however, this was not seen in MMTV-neu or MMTV-PyMT tumors, suggesting that EMT occurs in Myc-driven metastasis but not in Neu- or PyMT-induced metastasis (Trimboli et al., 2008). Rhim et al. (2012) used a similar method to demonstrate that pancreatic cells of epithelial origin undergo EMT and disseminate not only in the context of cancer but even at the preneoplastic stage. In this study, a YFP genetic label was used in conjunction with the KrasG12D/p53fl/+/Pdx1-Cre mouse model (KPCY) of pancreatic ductal adenocarcinoma (PDAC) to show that tumor cells with mesenchymal features can be found in the primary tumor stroma, circulation, and metastatic sites (Rhim et al., 2012). Upon careful inspection of disseminated tumor cells in the liver of this mouse model, it was revealed that EMT features are predominant in small metastatic lesions, but large lesions tend to have epithelial characteristics, consistent with an EMT-to-MET switch (Aiello et al., 2016).

EMT is a driver of metastasis

While these reports suggest that EMT and MET are important for metastasis, they do not address whether either is necessary and sufficient for this process. Tsai et al. (2012) elegantly demonstrated that Twist1-mediated EMT in squamous cell carcinoma was sufficient to drive dissemination, but Twist1 had to be down-regulated at metastatic sites for colonization to occur. Using a doxycycline-inducible Twist1 construct to induce EMT at the primary site either locally (by topical application) or systemically (by oral administration), the authors showed that Twist1 expression (and thus EMT) drove dissemination. However, metastatic colonization occurred only in mice that had locally activated Twist1, which could be reversed at metastatic sites that were not exposed to doxycycline (Tsai et al., 2012). Similarly, Snail expression is sufficient to drive breast cancer cells into the circulation, but it must be down-regulated once those cells reach the lung in order for the cells to successfully colonize (Tran et al., 2014). Along those lines, Takano et al. (2016) determined that isoform switching of Prrx1 from the EMT-promoting Prrx1b to the MET-promoting Prrx1a is necessary for liver metastasis in pancreatic cancer. Likewise, EMT driven by loss of p120-catenin accelerates pancreatic tumor progression and distant metastasis; however, p120-catenin-mediated stabilization of E-cadherin (and therefore MET) is required for metastatic colonization of the liver (Reichert et al., 2018). Interestingly, experimental lung metastasis did not require restoration of p120-catenin, suggesting that MET is not required in this context. Along similar lines of inquiry into the requirements of EMT and MET for metastasis, Title et al. (2018) deleted the miR-200 family in β cells of a mouse model of neuroendocrine cancer to determine whether its loss is sufficient to drive metastasis. The authors found that although miR-200 ablation increased survival, the resulting tumors metastasized more frequently, presumably due to increased EMT. The authors also deleted the miR-200 sites within the Zeb1 3′ untranslated region to promote EMT and found that this model phenocopies miR-200 ablation. Similar results were also seen in the KPC mouse model of PDAC with miR-200 deletion (Title et al., 2018).
While suppression of the miR-200 family can promote dissemination, the family is also an important driver of metastatic colonization. In addition to suppressing the Zebs, the miR-200s target Sec23 homologue A (Sec23a), a gene important for the secretion of metastasis-suppressive proteins such as Tinagl1 (Shen et al., 2019). Through this dual mechanism, ectopic expression of the miR-200 family suppresses tumor migration and invasion but promotes lung colonization (Korpal et al., 2011). These studies demonstrate that EMT drives dissemination and MET is necessary for metastatic colonization.

Context-dependent requirement of EMT for metastasis

To address the question of whether EMT is required for metastasis, Zheng et al. (2015) generated Snail and Twist knockout mice on the background of the KrasG12D/p53R172H/+/Pdx1-Cre (KPC) PDAC model. Surprisingly, neither Snail nor Twist deletion resulted in a significant decrease in metastasis, suggesting that EMT is not required for pancreatic cancer metastasis (Zheng et al., 2015). However, a similar study in which Zeb1 was deleted in the same mouse model reported a significant reduction in metastasis, reinvigorating the debate over whether EMT is required for metastasis (Krebs et al., 2017). Snail does seem to be critical for breast cancer metastasis, however, as conditional deletion of Snail in the context of the MMTV-PyMT model significantly reduced lung metastasis (Tran et al., 2014). On the other hand, a report that used lineage labeling called into question the role of EMT in breast cancer metastasis. Fischer et al. (2015) used Fsp1-Cre to genetically label MMTV-PyMT breast cancer cells that undergo Fsp1-mediated EMT with GFP and found that most lung metastases did not express GFP, suggesting that EMT is not required in this context (Fischer et al., 2015). Similarly, an EMT lineage label driven by either α-smooth muscle actin-Cre or Fsp1-Cre was not activated in macrometastatic lesions in an Flp-FRT mouse model of PDAC (Chen et al., 2018). However, the results of these two studies contradict an earlier report that used an Fsp1 knock-in GFP reporter and suicide construct (thymidine kinase) on an MMTV-PyMT background (Xue et al., 2003). The resulting metastatic lesions contained cells that were double positive for the mammary cell marker casein and GFP, suggesting that EMT occurred during tumor progression. Moreover, replacement of endogenous Fsp1 with GFP or ablation of FSP1+ cells with the nucleoside analogue ganciclovir both significantly abrogated lung metastasis. Thus, evidence supporting or arguing against an essential role of EMT in metastasis has been presented in different studies (Table 1); however, looking at the question from a contextual perspective can shed some light on the issue.

Context matters

There does not appear to be a single unifying molecular definition of EMT, especially in the context of cancer. Carcinomas use diverse programs to achieve the same goal of generating migratory tumor cells with the capability to invade, metastasize, and evade therapy. Cells that have undergone EMT in different tumor types (sometimes even within the same tumor) may look similar morphologically but can have dramatically divergent gene expression profiles. This might explain the apparently incongruous results of the aforementioned studies of EMT's role in metastasis (Li and Kang, 2016; Aiello et al., 2017; Ye et al., 2017).
Context-dependent manifestations of the EMT program were recently covered in an excellent review that discusses the nonredundant functions of various EMT-TFs (Stemmler et al., 2019). Of note, while Twist appears to be a critical EMT-TF in breast cancer (Yang et al., 2004; Mani et al., 2008), its role in PDAC EMT is not strongly supported (Hotz et al., 2007). Instead, Zeb1 seems to be the primary EMT-TF responsible for PDAC EMT, which would explain why its deletion significantly reduced EMT and metastasis (Krebs et al., 2017), whereas Twist deletion did not (Zheng et al., 2015). Likewise, while FSP1 is a reliable marker of EMT in the KPC model of PDAC (Aiello et al., 2016), its usefulness as an EMT marker in breast cancer varies by model (Trimboli et al., 2008), which could explain why Fsp1 lineage-labeled tumor cells could not be found in lung metastases (Fischer et al., 2015). Thus, it is likely that inconsistent findings within the EMT/metastasis field are due to different requirements for EMT effectors depending on tumor type, or due to an incomplete or partial EMT phenotype that does not involve EMT-TFs.

Partial EMT

Emerging evidence suggests that EMT is a spectrum, and cancer cells often fall somewhere between fully epithelial and fully mesenchymal (Nieto et al., 2016). Tumor cells rarely commit to full EMT except in rare cases such as hereditary diffuse gastric cancer and hereditary lobular breast cancer, where germline CDH1 mutations and subsequent epigenetic inactivation result in irreversible EMT (Barber et al., 2008; Dossus and Benusiglio, 2015). More often, tumor cells exhibit partial EMT, which can manifest as the coexpression of epithelial and mesenchymal markers or the loss of epithelial markers without gain of mesenchymal markers. Partial EMT appears to confer enhanced epithelial-mesenchymal plasticity on tumor cells, which is imperative for metastasis, tumor recurrence, and therapy resistance (Fig. 2). Therefore, it is critical to acknowledge and investigate the myriad ways tumor cells move through the EMT spectrum to address the issue in the clinic. A recent study reported that a partial EMT phenotype was predominant in the KPCY mouse model of PDAC, especially in association with the classic subtype. Using a lineage-labeling strategy, CDH1+ epithelial and CDH1− mesenchymal tumor cells were isolated from autochthonous PDAC tumors and subjected to RNA sequencing, which revealed that in a majority of tumors, Cdh1 mRNA was maintained in cells that had lost it at the protein level. In contrast to the common EMT mechanism of EMT-TFs transcriptionally repressing the epithelial program, epithelial proteins like CDH1 were internalized in RAB11+ recycling endosomes. This potentially results in a cell poised for MET: indeed, CDH1− cells derived from tumors that undergo partial EMT were found to be more capable of generating CDH1+ cells compared with cells from tumors that typically undergo full EMT. Interestingly, partial EMT tumor cell lines engaged in collective migration in vitro and generated more circulating tumor clusters in vivo compared with full EMT tumor lines, which disseminated as single cells in vitro and in vivo. Moreover, a partial EMT phenotype was detected in a number of human pancreatic, breast, and colon cancer cell lines, suggesting that partial EMT is a common feature of carcinomas. Single-cell RNA sequencing (scRNA-seq) is a powerful technique that overcomes the ambiguity of population transcriptomics.
Recently, this method has been used to demonstrate the coexpression of both epithelial and mesenchymal genes at single-cell resolution during development and tumor progression, revealing that partial EMT occurs naturally in vivo. During murine organogenesis, scRNA-seq transcriptional profiling has identified intestine, liver, and lung cells with a partial EMT phenotype at embryonic day 9.5-11.5 (Dong et al., 2018). These cells express epithelial markers, including CDH1, EPCAM, claudins, and cytokeratins, as well as the mesenchymal markers VIM, FN1, and SPARC. Interestingly, partial EMT cells had very low expression of classic EMT-TFs (SNAI1/2, ZEB1/2, and TWIST1/2), which is consistent with the expression of epithelial genes but suggests alternative mechanisms for inducing mesenchymal transcription. Partial EMT cells were not observed in adult intestine, liver, or lung; therefore, they probably represent a transient population during development.

Figure 2. EMT is a spectrum of epithelial and mesenchymal phenotypes. Partial EMT, which typically involves a combination of epithelial and mesenchymal gene expression, facilitates cluster migration/dissemination, plasticity between epithelial and mesenchymal states, and even plasticity in cell fate (i.e., transdifferentiation to adipocytes).

Dong et al. (2018) also investigated two previously published scRNA-seq datasets in primary breast cancer and lung adenocarcinoma patient-derived xenografts for evidence of partial EMT and found tumor cells that coexpressed VIM, FN1, EPCAM, and CDH1. Whether tumor cells arrive at partial EMT the same way that cells do during organogenesis remains an open question. In the same vein, Puram et al. (2017) surveyed primary and metastatic head and neck squamous cell carcinomas (HNSCCs) using scRNA-seq and identified a subset of malignant cells with a partial EMT signature. These cells bore some classic features of EMT, including expression of VIM, TGFβ-induced (TGFBI), and extracellular matrix genes, but expression of EMT-TFs was notably low, and consequently, epithelial gene expression was maintained. Using TGFBI expression to isolate partial EMT cells from an HNSCC cell line, the authors demonstrated that partial EMT cells are invasive and highly plastic. In situ, partial EMT cells were found at the leading edge of tumors near cancer-associated fibroblasts. Moreover, the partial EMT signature correlated with a malignant basal HNSCC subtype and lymph node metastasis, suggesting that partial EMT promotes loco-regional invasion. Pastushenko et al. (2018) investigated the EMT spectrum in lineage-labeled primary murine skin and mammary tumors using a panel of cell surface markers to identify cells in intermediate EMT states. Using different combinations of the markers CD106, CD51, and CD61, the authors distinguished six populations within the YFP+/EPCAM− (presumably mesenchymal) compartment located throughout the EMT spectrum. Immunostaining revealed that these populations all expressed the mesenchymal marker VIM but varied in their expression of the epithelial marker cytokeratin-14. scRNA-seq further demonstrated the heterogeneity of EMT features among the six EPCAM− populations. Transplantation assays revealed that all EPCAM− populations had higher tumor-initiating capacity compared with EPCAM+ cells, but in vitro experiments showed that they varied in plasticity.
Tumor cells in the middle of the EMT spectrum exhibited the most plasticity (i.e., were able to generate cells of all six subpopulations after sorting). Interestingly, although cells with a partial EMT phenotype were readily able to disseminate hematogenously, they did not have an increased ability to colonize the lung, suggesting that factors besides MET contribute to metastatic ability in this context (Pastushenko et al., 2018). This study paints a picture of the remarkably heterogeneous nature of EMT within tumors and begs the question of how it is established. Ishay-Ronen et al. (2019) recently demonstrated that the partial EMT state makes breast cancer cells conducive to transdifferentiation into adipocytes. In this study, combination treatment with rosiglitazone (a peroxisome proliferator-activated receptor-γ agonist to induce adipocyte differentiation) and trametinib (a kinase inhibitor to block TGF-β signaling) was sufficient to convert partial EMT breast cancer cells into benign, cell cycle-arrested adipocytes, significantly reducing lung metastasis (Ishay-Ronen et al., 2019). This intriguing report raises the possibility that cancer cell plasticity, which seems to be at its peak during partial EMT, could be exploited therapeutically.

Perspectives and future directions

EMT is a dynamic, highly regulated process that occurs during embryogenesis and tumor progression to endow epithelial cells with motility, stemness, and therapy resistance. Its role in metastasis has been hotly debated, with recent reports both refuting and supporting the requirement for EMT. This is likely due to context-dependent EMT mechanisms, but further investigation of EMT effectors will be necessary to clarify the situation. Partial EMT is emerging as a common manifestation of the EMT program in tumors and can bestow cancer cells with increased plasticity. At this point, the EMT field has collected an impressive body of knowledge on the signals that can promote EMT as well as effector proteins and their regulators. However, it remains unclear under which contexts these molecular players are involved in the natural history of tumor progression. Considering the emerging evidence that tumor types vary widely in their requirements for EMT-TFs and effectors in order to undergo EMT (or partial EMT), a systematic investigation of the necessity of each EMT driver in different in vivo tumor models would significantly enhance our understanding of how EMT actually happens in a living tumor. Moreover, this information would inform future experiments to determine whether EMT is required for metastasis: a clearer picture of which EMT programs are active in a given tumor type will be critical to successfully block EMT or eliminate cells that have undergone EMT. Only then will it be possible to definitively answer the question of whether EMT is crucial for metastasis in different cancer types and subtypes. In the context of cancer, full EMT is rarely achieved; rather, partial EMT is more common, with cancer cells falling along a spectrum of epithelial and mesenchymal traits. What are the molecular mechanisms that cause a cell to begin down the path to EMT and stop partway there? It probably depends on a combination of cell-autonomous and non-cell-autonomous factors: perhaps microenvironmental cues, the transcriptomic landscape of the cell, or chromatin accessibility (which could be attributed to the cell of origin). What are the functional consequences of partial EMT?
Recent reports suggest it pushes cells toward collective migration, but how does partial EMT relate to other facets of the EMT program such as stemness? Recent studies have also started to appreciate the importance of EMT dynamics in mediating its biological impact on stemness and metastasis (Celià-Terrassa et al., 2018), an area that will benefit from systems and computational biology approaches. Epithelial-mesenchymal plasticity seems to be important for allowing tumor cells to adapt to their ever-changing microenvironment, whether they find themselves in a distant organ or bombarded with a new therapy. Partial EMT appears to bestow dramatic plasticity on tumor cells, giving them the ability to transdifferentiate into an entirely different cell type, according to one recent report (Ishay-Ronen et al., 2019). Therapeutically targeting partial EMT, and thus cellular plasticity, could prove efficacious; however, the molecular mechanisms governing partial EMT must be parsed first. EMT contributes to nearly all of the hallmarks of cancer and continues to be an attractive target for cancer therapy. However, because the reverse process of MET appears to be critical for metastatic colonization, there has been justifiable hesitation to translate EMT inhibitors into the clinic for fear of stabilizing micrometastases. Moreover, it would be a challenge to address the problem of intra- and intertumor variation in EMT mechanisms. Nevertheless, a thoughtfully designed anti-EMT therapy combined with chemo-, radio-, and/or immunotherapy could optimize treatment outcomes. Potential anti-EMT strategies could include reversing EMT, directly targeting cells that have undergone EMT, or inducing transdifferentiation of EMT cells into a harmless cell type, as undertaken in Ishay-Ronen et al. (2019). Direct targeting of EMT tumor cells has proved challenging due to their general drug resistance and the difficulty of drugging TFs such as the EMT-TFs. Targeting the downstream effectors of EMT/MET or the overall plasticity of cancer cells might be a more precise alternative for therapeutic development. For example, Tinagl1 is a secreted metastasis-inhibitory protein that is down-regulated by miR-200s during MET. Therapeutic treatment with recombinant Tinagl1 reduces tumor progression and metastasis while avoiding the EMT induction that would result from targeting miR-200s directly (Shen et al., 2019). Finally, the transdifferentiation strategy would be valuable at any stage, perhaps even more so in the context of metastasis, since these lesions are especially refractory to treatment (Ishay-Ronen et al., 2019). These approaches should be combined with other established treatments for maximum effect. For example, since EMT is associated with immunosuppression, anti-EMT therapy could precede immunotherapy to sensitize the tumor to treatment. As the spectrum of EMT continues to be refined, hopefully novel druggable targets will be identified to augment current treatments.
A Multibranch Search Tree-Based Multi-Keyword Ranked Search Scheme over Encrypted Cloud Data

Out of privacy concerns, cloud service users choose to encrypt their personal data before outsourcing them to the cloud. However, it is difficult to achieve efficient search over encrypted cloud data, so how to design an efficient and accurate search scheme over large-scale encrypted cloud data is a challenge. In this paper, we integrate the bisecting k-means algorithm and a multibranch tree structure and propose the α-filtering tree search scheme based on bisecting k-means clusters. The novel index tree is built bottom-up, and a greedy depth first algorithm is used to filter out nonrelevant document clusters by calculating the relevance score between the filtering vector and the query vector. The α-filtering tree improves efficiency without loss of search accuracy. Experiments on a real-world dataset demonstrate the effectiveness of our scheme.

Introduction

Cloud computing is a new model in the IT enterprise which can offer high-quality computation, storage, and application capacity. Cloud customers choose to outsource their local data and computation to the cloud server to minimize data maintenance costs. Thus, protecting users' privacy while achieving efficient and precise data retrieval from the cloud server has become the focus of recent work. The traditional way to protect data privacy is to encrypt the original data; however, this makes data utilization very challenging. Search schemes based on ciphertext [1-7] can guarantee data privacy, but their search algorithms have high time and space complexity, which is not suitable for cloud data retrieval. To solve this problem, researchers have proposed a series of searchable encryption schemes [8-13] based on the theory of cryptography. These encryption schemes either do not produce high-accuracy retrieval results [8-10, 12] or incur large time and space overheads [8, 11]. Therefore, it is necessary to design an efficient and usable search scheme. In this paper, we propose an α-filtering tree search scheme based on bisecting k-means clusters, which achieves efficient multi-keyword ranked search over encrypted cloud data. We use the vector space model and the TF-IDF model to build the keyword dictionary and transform the documents and keywords into "points" in a multidimensional space that can be described by vectors, and then we use the secure inner product to encrypt the document vectors and query vectors. The relevance scores between the document vectors and query vectors are used to obtain the top-k most relevant documents. Our paper's main contributions are summarized as follows:

(i) We integrate the bisecting k-means algorithm and a multibranch tree structure, where the bisecting k-means algorithm is used to improve the clustering accuracy, and we propose an α-filtering tree search scheme based on the bisecting k-means clusters.

(ii) We propose a greedy depth first algorithm for searching the α-filtering tree, which improves multi-keyword search efficiency. By adopting the secure inner product encryption scheme, we achieve privacy-preserving ranked search on the encrypted α-filtering index tree.

(iii) We perform experiments on a real-world dataset and compare with existing schemes in terms of retrieval efficiency and index storage usage. The results show that our scheme is superior in search efficiency and storage usage.
The rest of the paper is organized as follows: Section 2 introduces related work, Section 3 introduces the main background knowledge, and Section 4 gives a brief introduction to our system model, threat model, and design goals. The constructions of the α-filtering tree and the search algorithm are presented in Section 5. Sections 6 and 7 give the experimental results and their analyses. Finally, the conclusion is given in Section 8.

Related Work

Searchable encryption schemes implement keyword searches over encrypted outsourced data, which allows users to store their personal data on the cloud server without privacy concerns. Recently, an increasing number of scholars have conducted research in this area. We discuss related work on the development of searchable encryption schemes' performance and functionality.

Single-Keyword Searchable Encryption.
Song et al. [14] first proposed a symmetric encryption search scheme; they encrypted each keyword in the document set separately and searched the entire dataset by sequential scanning. Thus, the search time of the scheme is linear in the overall size of the document set. Goh [15] proposed a searchable encryption scheme based on Bloom filters. They achieved search efficiency in the sense that the calculation overhead is not related to the number of documents in the dataset. However, due to the false-positive probability of the Bloom filter, the cloud server may return documents that do not contain the search keywords. The scheme in Chang and Mitzenmacher [16] uses two indexes. The first index is used to store and manage a premade dictionary by the user. The second requires two rounds of interaction between the user and the cloud server, which affects the user experience but achieves the same search efficiency as Goh [15]. Curtmola et al. [17] adopted two novel search schemes, SSE-1 and SSE-2. SSE-1 is secure against chosen-keyword attacks (CKA1), and SSE-2 against adaptive chosen-keyword attacks (CKA2). Their schemes' search time cost is proportional to the number of keywords retrieved. Boneh et al. [18] adopted a searchable encryption structure that allows everyone to store their data with a public key, but their scheme requires a large amount of computation.

Multi-Keyword Search Schemes.
Multi-keyword searchable encryption allows the user to submit multiple search keywords to retrieve the most relevant documents. These schemes can be further classified into ranked search and traditional search. In traditional search, most schemes are conjunctive keyword search, which returns all the documents containing the search keywords, or conjunctive-subset keyword search, which returns the documents containing a subset of the keywords. However, traditional search is not suitable for ranked search. Cao et al. [8] first achieved a privacy-preserving multi-keyword ranked search scheme. In their scheme, documents and search keywords are described by dictionary-scale vectors, and coordinate matching is used to rank the documents. Since the weights of different keywords in documents are not considered, the retrieval results of the scheme lack accuracy, and the search time is linear in the scale of the dataset. Sun et al. [9] proposed a novel multi-keyword ranked scheme; they used the TF-IDF vector space model and cosine distance measurement to build an index tree structure. Their experiments show that the scheme is more efficient than linear search but lacks accuracy.
Orencik et al. [10] adopted Locality-Sensitive Hashing to cluster similar documents, but their ranked search result is also not accurate. Xia et al. [11] adopted the vector space model and a KBB tree to build a dynamic multi-keyword ranked search scheme, which obtains the ranked result more precisely. However, as the scale of the document set increases, the index tree space cost grows large, and the pruning effect of the search algorithm is also reduced, resulting in a decrease in search efficiency. To enhance searchable encryption's usability and functionality, many schemes that support fuzzy keyword search [19-23], conjunctive keyword search [3, 24-26], and similarity search [27-30] have also been presented. Dynamic schemes can support updates on the dataset, which largely enhances searchable encryption's usability. The first dynamic searchable encryption scheme was proposed by van Liesdonk et al. [31] and supports a limited number of updates; after their work, many dynamic searchable encryption schemes were proposed [32-35]. Verifiable schemes can check the integrity of search results when the cloud server is not honest, and much research has been conducted to support verifiable searches [26, 36, 37]. To extend searchable encryption to other data types such as multimedia data, further works have been proposed [38, 39]. As noted above for Xia et al.'s index tree [11], the index tree space cost and the pruning effect of the search algorithm degrade as the document set grows. Thus, we propose a multibranch index tree to overcome this problem. By adopting a clustering algorithm over the document set, we can further increase search efficiency; moreover, the multibranch tree can also save index space.

Vector Space Model.
Among many information retrieval models, the vector space model is the most popular method of relevance measurement, and we adopt the TF-IDF model for feature extraction. It is widely used in plaintext multi-keyword retrieval. TF (term frequency) refers to the word frequency, that is, the number of occurrences of the keyword w in the document f divided by the total number of words |f| contained in f. IDF (inverse document frequency) indicates the inverse document frequency, derived from the number of documents divided by the number of documents containing the keyword. The keyword dictionary W is first generated by filtering the stop words from all the words contained in the document set D. Then, the document vector F_V and the query vector V_Q are generated according to the keyword dictionary W. The dimension of F_V and V_Q equals the scale of the keyword dictionary, and each dimension represents a corresponding keyword w_i. The value of each dimension of F_V is the normalized TF value, and that of V_Q the normalized IDF value. The TF and IDF values of a keyword w are calculated as

TF′_{f,w} = N_{f,w} / |f| and IDF′_w = ln(1 + N / N_w),

where N_{f,w} is the number of occurrences of w in f, N is the number of documents in D, and N_w is the number of documents containing w.

Relevance Measurement.
The inner product operation is performed on two equal-length vectors, and the relevance between two vectors is quantized by the inner product score: the larger the score, the higher the relevance between the two vectors. The relevance score is calculated as

Score(F_V, V_Q) = F_V · V_Q.    (2)

We make the following remarks about equation (2):

(i) If F_V is a document vector and V_Q is a search vector, Score(F_V, V_Q) is the relevance score between the document and the search keywords.

(ii) If F_V is a filtering vector of an index tree node and V_Q is a search vector, Score(F_V, V_Q) is the relevance score between the upper bound vector of the documents stored in this node and the search keywords.
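To make the vector model concrete, the following is a minimal Java sketch (Java being the language the evaluation in Section 7 is implemented in) of how document vectors, query vectors, and relevance scores could be computed. The class and method names are illustrative and not taken from the paper; the formulas follow the TF′ and IDF′ definitions above.

```java
import java.util.*;

// Minimal sketch of TF-IDF vector construction and inner-product relevance
// scoring as described above. All names are illustrative assumptions.
public class TfIdfScoring {
    // Normalized TF vector of one document over a fixed keyword dictionary:
    // TF'_{f,w} = N_{f,w} / |f|.
    static double[] documentVector(List<String> docWords, List<String> dictionary) {
        double[] v = new double[dictionary.size()];
        for (int i = 0; i < dictionary.size(); i++) {
            String w = dictionary.get(i);
            long count = docWords.stream().filter(w::equals).count();
            v[i] = (double) count / docWords.size();
        }
        return v;
    }

    // IDF-weighted query vector: IDF'_w = ln(1 + N / N_w), and zero for
    // dictionary keywords that are not queried.
    static double[] queryVector(Set<String> query, List<String> dictionary,
                                int totalDocs, Map<String, Integer> docFreq) {
        double[] q = new double[dictionary.size()];
        for (int i = 0; i < dictionary.size(); i++) {
            String w = dictionary.get(i);
            if (query.contains(w)) {
                q[i] = Math.log(1.0 + (double) totalDocs / docFreq.getOrDefault(w, 1));
            }
        }
        return q;
    }

    // Relevance score as the inner product of the two vectors (equation (2)).
    static double score(double[] fv, double[] q) {
        double s = 0;
        for (int i = 0; i < fv.length; i++) s += fv[i] * q[i];
        return s;
    }

    public static void main(String[] args) {
        List<String> dict = List.of("cloud", "search", "privacy");
        List<String> doc = List.of("cloud", "search", "cloud", "data");
        double[] fv = documentVector(doc, dict);
        double[] q = queryVector(Set.of("cloud"), dict, 100, Map.of("cloud", 40));
        System.out.println(score(fv, q)); // relevance score for one keyword
    }
}
```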
Bisecting k-Means Cluster.
In data mining, the bisecting k-means algorithm is a cluster analysis algorithm. Two initial centroids are selected, each point is assigned to the nearest centroid in turn, and the points assigned to the same centroid form a cluster. The centroid of each cluster is continually updated from the points assigned to it; assignment and update are repeated until the clusters no longer change, at which point the clustering algorithm is complete. We use the cosine distance to measure the distance from a point to a centroid, defined as

d(x, y) = 1 − (x · y) / (‖x‖ ‖y‖),    (3)

where x is the point's vector, y is the centroid's vector, and ‖x‖ and ‖y‖ are the norms of x and y.

Secure Inner Product Operation.
The special matrix encryption proposed in [8] can achieve a privacy-preserving vector inner product. Assuming that p and q are two n-dimensional vectors, the user encrypts them to p̂ = M^T p and q̂ = M^{-1} q, where M is a random n × n invertible matrix. Therefore, we can get the inner product of the original vectors from the inner product of their encrypted forms alone:

p̂ · q̂ = (M^T p) · (M^{-1} q) = p^T M M^{-1} q = p · q.    (4)
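The following small numerical sketch illustrates the inner-product-preserving property of equation (4). The 2 × 2 matrix and the test vectors are arbitrary choices for demonstration only; a real deployment would use large random invertible matrices.

```java
// Numerical sketch of the secure inner product from equation (4):
// encrypting p as M^T p and q as M^(-1) q preserves the inner product.
// The matrix and its inverse are hard-coded for illustration.
public class SecureInnerProduct {
    // M = [[2, 1], [1, 1]] and its inverse Minv = [[1, -1], [-1, 2]].
    static final double[][] M    = {{2, 1}, {1, 1}};
    static final double[][] MINV = {{1, -1}, {-1, 2}};

    static double[] multiply(double[][] a, double[] x, boolean transpose) {
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x.length; j++)
                y[i] += (transpose ? a[j][i] : a[i][j]) * x[j];
        return y;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    public static void main(String[] args) {
        double[] p = {0.3, 0.7};
        double[] q = {1.2, 0.5};
        double[] pEnc = multiply(M, p, true);     // M^T p
        double[] qEnc = multiply(MINV, q, false); // M^(-1) q
        // Both print 0.71 (up to floating-point rounding): the cloud server
        // can rank by pEnc . qEnc without ever seeing p or q in the clear.
        System.out.println(dot(p, q));
        System.out.println(dot(pEnc, qEnc));
    }
}
```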
System Model.
In this paper, there are three entities in our system model: the data owner, the data user, and the cloud server, as shown in Figure 1. These three entities collaborate as follows. The data owner has the local dataset D and wants to outsource it in secure form to the cloud server while still providing a search service for users. In our scheme, the data owner first generates the searchable index tree I from D. Then, it uses the secure keys to encrypt both D and I into their encrypted forms. After that, it shares the secure key with the data user through access control and outsources the encrypted document set and index tree to the cloud server. The cloud server provides both storage and search services: it stores the encrypted index tree and the encrypted document set, and after receiving a search trapdoor T_Q from the data user, it performs the secure search over the encrypted index and returns the search results to the data user. The data user is the party authorized to access the document set. It generates the search trapdoor T_Q from its search keywords Q through the proposed search scheme and sends T_Q as the search request to the cloud server. After receiving the search results, it uses the secure key to decrypt the encrypted documents and obtain the plaintext documents.

Threat Model.
We adopt the same "honest-but-curious" threat model as current work [8, 9, 11, 40, 42]. That is to say, the cloud server follows the user's instructions honestly and precisely, but it may curiously analyze the received data to obtain additional information about the dataset. Two threat models proposed by Cao et al. [8] are adopted in our work:

Known Ciphertext Model: the cloud server can access the ciphertext dataset, the encrypted index tree, and the search trapdoors, and can thus conduct a ciphertext-only attack.

Known Background Model: in this stronger model, the cloud server has more dataset-related information than in the known ciphertext model. It can hold statistical information about the relation between search trapdoors and search results, from which it could infer or recognize some of the search keywords in a trapdoor.

Design Goals.
To ensure privacy, efficiency, and accuracy in multi-keyword ranked search over encrypted cloud data, our design should meet the following requirements.

Search Efficiency: the proposed search scheme should be more efficient than existing multi-keyword search schemes.

Search Accuracy: the proposed search scheme should guarantee the accuracy of the search results.

Privacy Preserving: the proposed scheme should ensure document privacy, index privacy, trapdoor privacy, trapdoor unlinkability, and keyword privacy throughout the search process.

Index and Search Algorithm

In this section, we mainly discuss the index construction method and the search method based on the index tree, and we give the corresponding algorithms. We first construct a document atom cluster list using the bisecting k-means algorithm. Then, based on the generated atom cluster list, we build the α-filtering tree and propose a corresponding greedy depth first search algorithm for multi-keyword ranked search.

Atom Cluster List Generation Algorithm.
Considering the document set D as the input raw cluster, we use the bisecting k-means algorithm to perform top-down bisecting clustering until every generated subcluster contains no more than μ documents (Algorithms 1 and 2), and thus a binary clustering tree is built (Algorithm 2). Here, μ is the given threshold for clustering. Then, we traverse the leaf clusters of the generated binary clustering tree, and the atom cluster list L is constructed in Algorithm 3, which is used for building the α-filtering index tree.

Algorithm 1:
Input: the document set, D; the threshold of the maximum number of documents in an atom cluster, μ.
Output: an atom cluster list, L.
(1) Create a root cluster node r which holds all documents of D;
(2) GenBiSectingTree(r, μ);

Definition 1. Atom Cluster. The leaf clusters in the binary tree generated by Algorithm 1 are the atom clusters, where the number of documents in each atom cluster is no more than μ. Assuming that the list of atom clusters generated by Algorithm 1 is L = {C_1, C_2, ..., C_t}, we have the following properties:

(1) |C_i| ≤ μ for 1 ≤ i ≤ t;
(2) C_i ∩ C_j = ∅ for i ≠ j;
(3) C_1 ∪ C_2 ∪ ... ∪ C_t = D.

We illustrate the generation process of the atom cluster list L in Algorithms 1-3 by an example. We assume that the document set is D = {d_1, d_2, ..., d_15} and μ = 3. The first round of bisecting clustering is performed on D, and two subclusters are generated as shown in Figure 2. In the same way, the second layer's and the third layer's subclusters are each divided into two clusters, and a subcluster stops clustering when the number of documents it contains is less than or equal to 3. Finally, a binary clustering tree is formed whose leaf nodes are the atom clusters over {d_1, ..., d_15}, as shown in Figure 2. Then, the algorithm traverses the leaf nodes of the binary clustering tree in order, and the atom cluster list L = {C_1, C_2, C_3, C_4, C_5, C_6} is generated.

α-Filtering Tree.
Definition 3. α-Filtering Tree. A node u in the α-filtering tree is a triple, denoted as u = ⟨FV, PL, DC⟩, where u·FV is an n-dimensional filtering vector, u·PL is a child pointer list with at most α pointers, and u·DC stores documents when u is a leaf node.

(1) If u is a leaf node, then u·PL = ∅, u·DC stores the documents of one atom cluster, and u·FV is the upper bound vector of the document vectors in u·DC.
(2) If u is a nonleaf node, then u·DC = ∅ and u·FV is the upper bound vector of the filtering vectors of its child nodes.
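A minimal Java sketch of the node triple of Definition 3 follows. It assumes, consistent with the upper-bound property that Theorem 3 below relies on, that a node's filtering vector is the dimension-wise maximum over its children's filtering vectors (or, at a leaf, over the atom cluster's document vectors); all identifiers are illustrative and not from the paper.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an α-filtering tree node as the triple <FV, PL, DC>.
public class FilterNode {
    double[] fv;                             // u.FV: n-dimensional filtering vector
    List<FilterNode> pl = new ArrayList<>(); // u.PL: at most α child pointers
    List<Integer> dc = new ArrayList<>();    // u.DC: document ids (leaf nodes only)
    List<double[]> dcv = new ArrayList<>();  // document vectors for the ids in dc

    // Leaf node over one atom cluster: FV upper-bounds the document vectors.
    static FilterNode leaf(List<Integer> ids, List<double[]> docVectors) {
        FilterNode u = new FilterNode();
        u.dc.addAll(ids);
        u.dcv.addAll(docVectors);
        u.fv = maxVector(docVectors);
        return u;
    }

    // Internal node over up to α children (one merge step of Algorithm 4).
    static FilterNode parent(List<FilterNode> children) {
        FilterNode u = new FilterNode();
        u.pl.addAll(children);
        List<double[]> childFvs = new ArrayList<>();
        for (FilterNode c : children) childFvs.add(c.fv);
        u.fv = maxVector(childFvs);
        return u;
    }

    // Dimension-wise maximum of a nonempty list of equal-length vectors.
    static double[] maxVector(List<double[]> vectors) {
        double[] m = vectors.get(0).clone();
        for (double[] v : vectors)
            for (int i = 0; i < m.length; i++)
                m[i] = Math.max(m[i], v[i]);
        return m;
    }
}
```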
We give the construction procedure of the α-filtering tree in Algorithm 4, which builds the α-filtering tree from the atom cluster list. Tree nodes are created in each round of processing (steps 8-21). The original atom cluster list is treated as the first child node list (CNL). In each round, α nodes are fetched from CNL at a time, and a parent node is created to hold them and added to the parent node list (PNL). After all the nodes in CNL have been fetched, the parent node list (PNL) for this round is complete. If there is more than one node in PNL, all nodes in PNL are moved to CNL for the next round; otherwise, the only node in PNL is the root of the generated index tree.

Theorem 2. The height of an α-filtering tree with t leaf nodes is ⌈log_α t⌉ + 1.

Proof. We assume that the length of the atom cluster list L is t, that is, the number of leaf nodes of the α-filtering tree is t. According to Algorithm 4, after the 1st, 2nd, ..., xth rounds of processing, the number of generated parent nodes becomes ⌈t/α⌉, ⌈t/α²⌉, ..., ⌈t/α^x⌉. When the number of generated parent nodes is 1, the construction of the α-filtering tree is finished, so ⌈t/α^x⌉ = 1. Then, we deduce x = ⌈log_α t⌉. Since the height of the tree increases by 1 with each merge round and the initial height of the tree is 1, the height of an α-filtering tree with t leaf nodes is ⌈log_α t⌉ + 1.

Definition 4. For a query Q whose vector is V_Q and two nodes u and u′, if Score(V_Q, u·FV) ≥ Score(V_Q, u′·FV), then u has a higher or equal relevance score with Q than u′, which is denoted as u ▷ u′.

Theorem 3. We assume that u = ⟨FV, PL, DC⟩ is a nonleaf node in the α-filtering tree and u·PL stores g child nodes, i.e., u·PL = {u·PL[1], u·PL[2], ..., u·PL[g]} and 1 ≤ g ≤ α. For a query Q, we have ∀u′ ∈ u·PL → u ▷ u′.

Proof. To prove ∀u′ ∈ u·PL → u ▷ u′ is to prove Score(V_Q, u·FV) ≥ max{Score(V_Q, u·PL[1]·FV), Score(V_Q, u·PL[2]·FV), ..., Score(V_Q, u·PL[g]·FV)}. Every element of the n-dimensional filtering vector u·FV is generated by

u·FV[i] = max{u·PL[1]·FV[i], u·PL[2]·FV[i], ..., u·PL[g]·FV[i]}, 1 ≤ i ≤ n.

Thus, Score(V_Q, u·FV) is not less than the relevance score between any child node's filtering vector and the query vector. Then we have ∀u′ ∈ u·PL → u ▷ u′.

During the search process, for a given Q, if the relevance score between a subtree's root node filtering vector and the query vector is not higher than the threshold of the candidate result list, then all of the subtree's nodes are noncandidates according to Theorem 3. Thus, we can directly skip this subtree, and the search efficiency is improved; this is the pruning criterion of the greedy depth first search algorithm. Adopting this idea, we propose the greedy depth first search algorithm shown in Algorithm 5.

Algorithm 5: SearchIndex(r, V_Q, k, λ, RL).
Input: the root node of an α-filtering tree, r; the query vector of Q, V_Q; the number of requested documents, k; the minimum of the relevance scores between the documents in RL and Q, λ; the list for storing the top-k ranked documents, RL.
Output: RL.
(1) u = r;
(2) if u is a leaf node then
(3) Add all the documents of u·DC to RL;
(4) if |RL| > k then
(5) Set the threshold λ equal to the minimum of the relevance scores between the candidate documents in RL and V_Q;
(6) Remove from RL the documents whose relevance scores with V_Q are smaller than λ;
(7) end if
(8) else
(9) if Score(V_Q, u·FV) > λ then
(10) for each u′ in u·PL do
(11) SearchIndex(u′, V_Q, k, λ, RL);
(12) end for
(13) end if
(14) end if

In Figure 3, we construct a 3-filtering index tree example to further illustrate the multi-keyword ranked search algorithm. The index tree is built bottom-up: the leaf nodes are generated from the atom cluster list, and the intermediate nodes are generated based on the leaf nodes. We assume that the query vector is V_Q = (0.5, 0.5, 0, 0) and the top-3 ranked documents are requested. When the search starts, the algorithm first visits the left subtree of u11, u21, and u31 recursively and finds that u31 is a leaf node holding 3 documents. The algorithm puts all the documents into the result list RL = {d1, d2, d3}, where the relevance scores are 0.3, 0.35, and 0.3, respectively. Then u32 is accessed, and the relevance score between its filtering vector and the query vector is 0.2, which is less than 0.3; therefore, RL remains unchanged. After that, u33 is accessed with relevance score 0.35, so d9 and d10 are added to RL, replacing d1 and d3. Finally, the algorithm searches the subtree rooted at u22 and finds that there is no need to search the remaining subtree. The search is then finished.
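The walkthrough above can be reproduced with a short recursive rendering of Algorithm 5 over the FilterNode sketch. This is an illustrative implementation, not the paper's code; it represents RL as a min-heap of {score, docId} pairs so that λ is always the k-th best score found so far.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Greedy depth-first search with pruning: by Theorem 3, a subtree whose
// filtering-vector score does not exceed λ cannot improve the result list.
public class GreedySearch {
    static double score(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static void search(FilterNode u, double[] vq, int k, PriorityQueue<double[]> rl) {
        double lambda = rl.size() < k ? Double.NEGATIVE_INFINITY : rl.peek()[0];
        if (u.pl.isEmpty()) {
            // Leaf: score every document of the atom cluster, keep the best k.
            for (int i = 0; i < u.dc.size(); i++) {
                rl.add(new double[]{score(u.dcv.get(i), vq), u.dc.get(i)});
                if (rl.size() > k) rl.poll(); // evict the current minimum
            }
        } else if (score(u.fv, vq) > lambda) {
            // Internal node survives the pruning test: descend into children.
            for (FilterNode child : u.pl) search(child, vq, k, rl);
        }
    }

    public static void main(String[] args) {
        PriorityQueue<double[]> rl =
            new PriorityQueue<>(Comparator.comparingDouble((double[] e) -> e[0]));
        // With a built tree: search(root, vq, 3, rl) leaves the top-3
        // {score, docId} pairs in rl, matching the Figure 3 walkthrough.
    }
}
```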
Effective and Secured Multi-Keyword Ranked Search Scheme

In this section, we construct the secure search scheme using the secure kNN algorithm [41]. The data owner constructs the index tree from the document set and then uses the secure keys to encrypt the document set and the index tree, respectively. The data user submits a search request to the cloud server using query keywords. The cloud server performs the search algorithm on the index tree and returns the resulting documents.

GenKey. The data owner generates the secure key SK = {g, S, M1, M2} used to encrypt the documents and index tree. Here, g is the secure symmetric encryption key for document encryption and is only shared with the data user but protected from the cloud server. S is a bit vector for vector splitting; each dimension of S is randomly chosen to be 0 or 1, and the numbers of 0s and 1s should be nearly equal. M1 and M2 are both n × n randomly generated invertible matrices.

BuildIndex(D, SK). The data owner first performs the index tree construction algorithms discussed in Sections 5.1 and 5.2 to generate the plaintext index tree I over the documents in D. Then, the data owner encrypts the index tree into its encrypted form. Specifically, each document vector and each node's filtering vector is split into two vectors using the bit vector S. For simplicity, we use V to represent one of these vectors; the splitting procedure is as follows: if S[i] = 0, then V′[i] = V″[i] = V[i]; if S[i] = 1, then V′[i] and V″[i] are set to two random values with V′[i] + V″[i] = V[i]. Then the data owner encrypts the split vectors to {M1^T V′, M2^T V″}. After that, the data owner encrypts the documents in each leaf node's atom cluster with the secure key g, and the encrypted index tree is generated. Finally, the data owner outsources the encrypted index tree to the cloud server.

GenTrapdoor(Q, SK). The data user generates the query vector V_Q according to the query keywords in Q. Then the secure key SK is used to generate the corresponding trapdoor T_Q. The generation of T_Q is similar to the encryption procedure for document vectors. First, V_Q is split into two vectors by the complementary rule: if S[i] = 1, then V_Q′[i] = V_Q″[i] = V_Q[i]; if S[i] = 0, then V_Q′[i] and V_Q″[i] are set to two random values with V_Q′[i] + V_Q″[i] = V_Q[i]. Then, the data user encrypts the split vectors into the trapdoor T_Q = {M1^{-1} V_Q′, M2^{-1} V_Q″}. Finally, T_Q is submitted to the cloud server as the search command.

SearchIndex(I, T_Q, k). The cloud server receives the trapdoor T_Q and performs the search algorithm on the secure index tree. Then, the cloud server returns the encrypted top-k result list RL to the data user, who decrypts the encrypted documents; the search process is then finished.
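The splitting step of BuildIndex and GenTrapdoor can be sketched as follows, assuming the standard secure kNN splitting convention of [8, 41] described above: complementary rules for index and query vectors so that V′ · V_Q′ + V″ · V_Q″ = V · V_Q. The matrix multiplications of the earlier sketch are then applied to each half.

```java
import java.util.Random;

// Sketch of the vector splitting of BuildIndex/GenTrapdoor; all names are
// illustrative. The complementary per-dimension rules keep the recombined
// inner product equal to the original one.
public class VectorSplit {
    static final Random RNG = new Random();

    // Index/document vector split by bit vector S:
    // S[i] == 1 -> V'[i] + V''[i] = V[i] (random split)
    // S[i] == 0 -> V'[i] = V''[i] = V[i]
    static double[][] splitIndexVector(double[] v, int[] s) {
        double[] v1 = new double[v.length], v2 = new double[v.length];
        for (int i = 0; i < v.length; i++) {
            if (s[i] == 1) {
                v1[i] = RNG.nextDouble();
                v2[i] = v[i] - v1[i];
            } else {
                v1[i] = v2[i] = v[i];
            }
        }
        return new double[][]{v1, v2};
    }

    // Query vectors use the complementary rule, so per-dimension products
    // recombine: V'[i]*Q'[i] + V''[i]*Q''[i] = V[i]*Q[i] in both cases.
    static double[][] splitQueryVector(double[] q, int[] s) {
        double[] q1 = new double[q.length], q2 = new double[q.length];
        for (int i = 0; i < q.length; i++) {
            if (s[i] == 0) {
                q1[i] = RNG.nextDouble();
                q2[i] = q[i] - q1[i];
            } else {
                q1[i] = q2[i] = q[i];
            }
        }
        return new double[][]{q1, q2};
    }
}
```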
Then, the cloud server returns the encrypted top-k document result list RL to the data user, who decrypts the encrypted documents; the search processing is then finished.

The special matrix encryption can recover the inner product of two vectors from the inner product of their encrypted forms alone, which is illustrated as follows:

Ĩ · T_Q = (M1^T V′) · (M1^{-1} V_Q′) + (M2^T V″) · (M2^{-1} V_Q″) = V′ · V_Q′ + V″ · V_Q″ = V · V_Q.

To protect trapdoor unlinkability and keyword privacy under the known background model, we should prevent the server from calculating the exact value of the relevance score between T_Q and F_V, which can leak TF distribution information. Thus, we add some phantom terms [11] to the vectors generated in our scheme to disturb the relevance-score calculation, at the cost of a decrease in search accuracy. In the enhanced scheme, we generate (n + n′) × (n + n′)-dimensional secure matrices, and the document vectors are extended to n + n′ dimensions. The extended elements F_V[n + i] are set to random numbers β_i. Similarly, the query vector is also extended to an (n + n′)-dimensional vector, and its extended elements are randomly set to 1 or 0. Thus, the relevance score between the query trapdoor and the document vector equals F_V · V_Q + Σ_i β_i, where the sum runs over those i with V_Q[n + i] = 1. The randomness of the β_i ensures privacy against the known background model.

Security Analysis. In this paper, we construct a tree-based secure search scheme in the same way as [11, 42] to achieve searchable encryption, so the security of our scheme is the same as that of [11, 42]. We give a brief argument as follows:

(i) Document privacy: we apply traditional symmetric encryption to the documents before outsourcing them to the cloud server. As long as the secure key is kept from the adversary, document privacy is protected in our scheme.

(ii) Index and trapdoor privacy: the document vectors and query vectors store the TF and IDF values of the corresponding keywords and, after being randomly split, are encrypted with the secure matrices generated by secure kNN. The secure matrices are randomly generated invertible matrices, and the adversary cannot recover them from the encrypted vectors alone. Therefore, index and trapdoor privacy is protected in our scheme.

(iii) Trapdoor unlinkability: the query trapdoor is randomly split by the split vector S for each search, so the trapdoors differ even for identical search requests. Thus, trapdoor unlinkability is guaranteed. However, the cloud server can still link identical search requests by inferring the access pattern and the ranked results of the searches. To address this, we can expand the vectors used in our secure scheme by adding phantom dimensions that interfere with the relevance score. With phantom terms, the results of identical requests can differ. The search accuracy may decrease, however, and the balance between privacy and accuracy is discussed in [11].

(iv) Keyword privacy: index and trapdoor privacy is protected in our scheme, which means keyword privacy is also protected in the known ciphertext model. In the known background model, the relevance scores between the documents and the query vector can leak TF information about the query keywords. If a search request has only one search keyword, or one of the search keywords has a high TF value, the cloud server can easily infer this keyword from its statistical information about the TF distribution of keywords. As above, we add phantom terms to obfuscate the relevance score between the query trapdoor and the document vector.
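The inner-product-preserving encryption can be checked numerically. The sketch below follows the standard secure kNN convention summarized above (random splitting controlled by S, M1^T/M2^T on the index side, M1^{-1}/M2^{-1} on the trapdoor side); the matrix size and the random test vectors are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
S = rng.integers(0, 2, size=n)                 # bit vector for splitting
M1 = rng.normal(size=(n, n))                   # random matrices are invertible
M2 = rng.normal(size=(n, n))                   # with probability one

def split(V, is_query):
    """Split V into (V1, V2) using S; queries use the complementary rule."""
    r = rng.normal(size=n)
    V1, V2 = V.copy(), V.copy()
    mask = (S == 1) ^ is_query                 # dimensions that get a random split
    V1[mask], V2[mask] = r[mask], V[mask] - r[mask]
    return V1, V2

def enc_index(V):
    V1, V2 = split(V, is_query=False)
    return M1.T @ V1, M2.T @ V2

def enc_trapdoor(Q):
    Q1, Q2 = split(Q, is_query=True)
    return np.linalg.inv(M1) @ Q1, np.linalg.inv(M2) @ Q2

V, Q = rng.random(n), rng.random(n)            # document/filtering vector, query
I1, I2 = enc_index(V)
T1, T2 = enc_trapdoor(Q)
assert np.isclose(I1 @ T1 + I2 @ T2, V @ Q)    # score preserved under encryption
```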
That is to say, the observed score varies with different search requests. Thus, the cloud server cannot link the keywords with their TF distribution, and keyword privacy is enhanced.

Performance Analysis

We evaluate the performance of our α-filtering index tree scheme in this section and compare it with Xia et al.'s index tree scheme [11] and Zhu et al.'s HAC-tree [42] under different settings. We use a real-world dataset which has 120000 documents in total and implement our scheme using Java on Windows 10 with an Intel Core i5-6200U @ 2.30 GHz CPU; the default parameter setting is shown in Table 1. Here k, μ, |Q|, α, and m are the number of requested documents, the document threshold in each atom cluster, the number of search keywords, the branching factor α, and the number of documents, respectively. In the enhanced scheme, we add phantom terms to strengthen the security of our scheme; without the phantom terms, the search accuracy and efficiency of the two schemes are the same. We therefore only evaluate the original scheme, for simplicity. The influence of phantom terms is discussed in [11].

Space Usage Evaluation

In this section, we conduct the space analysis of the different schemes with respect to the index tree. We only discuss the index tree space usage; therefore, the search parameters are not changed. The space usage of Xia's scheme is the same as Zhu's because both are binary trees with the same number of nodes.

Space Usage versus μ. We change the document threshold μ in each cluster to compare the space usage of the three schemes. Figures 4(a) and 4(b) show the index tree space cost when the number of documents is 20000 and 120000, respectively. The results show that as the scale of the document set increases, the space usage of the index tree increases significantly; the reason is that more tree nodes are added to the index tree to store more documents. The results also show that a larger threshold reduces the number of nodes in the α-filtering tree and thereby saves index tree space.

Space Usage versus α. We change α of the α-filtering tree to compare the space usage of the three schemes. The results show that an appropriate setting of α can largely reduce the space usage of the α-filtering tree. But when α is too large, the saving diminishes: more and more nodes share the same parent node, and the space usage tends to a stable value.

Index Building Time Cost Evaluation

In this section, we evaluate the time cost of index building. We measure the time cost of the BuildIndex algorithm of our scheme, which is shown in Table 2, given m = 20000. The BuildIndex algorithm in our scheme takes hours, while Xia's scheme takes seconds. It should be noted that the keyword extraction and TF-IDF calculation are the same in all three schemes, and the tree-construction step has almost the same time cost because the basic structure of the tree is the same. The main difference in time cost is that our scheme uses a clustering algorithm to further improve search efficiency, and clustering can consume a lot of time, which leads to a worse index-building time. This can be improved by adopting more efficient clustering algorithms, such as distributed clustering. The longer index-building time is affordable because the index only needs to be built once, while providing more efficient searches.

Search Time Cost Evaluation

In this section, we evaluate the search time cost of the different schemes.
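The space argument can be made concrete by counting nodes. The helper below compares the internal-node count of an α-branching tree built round by round over the same t leaves for several values of α; it is a back-of-the-envelope aid, not a measurement from the paper.

```python
from math import ceil

def internal_nodes(t, alpha):
    """Internal nodes of an α-filtering tree built round by round over t leaves."""
    count, n = 0, t
    while n > 1:
        n = ceil(n / alpha)
        count += n
    return count

t = 20000
for alpha in (2, 4, 8, 16):
    print(alpha, internal_nodes(t, alpha))
# internal nodes shrink roughly like t / (alpha - 1) as alpha grows,
# which is why a larger branching factor saves index space
```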
Each data point in the figures is averaged over at least 10 runs.

(1) Time cost versus μ. Figure 6 indicates that our scheme outperforms the existing schemes in the search process. The α-filtering tree speeds up the process of finding the leaf nodes by shortening the height of the tree, so that each intermediate node gives access to more nodes. The k-means clustering gathers similar documents closely in the leaf nodes, which fills the candidate result list quickly. But when μ increases, the time cost of our scheme tends to increase as well; the reason is that the number of documents in a leaf node grows, which slows down the relevance calculation within the leaf. The search times of Xia's and Zhu's trees increase sharply as the scale of the document set grows; the reason is that both schemes use binary trees, whose height increases faster than that of the α-filtering tree in our scheme.

(2) Time cost versus α. Figure 7 indicates that the time cost of our scheme is lower than that of the existing schemes. As mentioned above, an appropriate setting of α improves the performance of our scheme. But when α is too large, the pruning test in an intermediate node requires more computation, and the pruning effect can become worse because there are fewer subtrees left to prune.

(3) Time cost versus |Q|. Figure 8 shows that a larger number of search keywords slows down the search process of tree-based index schemes. But overall, our scheme outperforms the other schemes thanks to the α-filtering tree.

(4) Time cost versus k. Figure 9 shows that under different settings of k, the time cost of our scheme is lower than that of Xia and Zhu. When k increases, the time cost of the tree-based schemes increases only slightly; the reason is that the pruning function in the tree index saves relevance calculations between documents and the query vector.

The setting of α. The experiments show that different settings of α result in different improvements in our scheme, but it is hard to find an appropriate α for a tree with m nodes. The space usage of the α-filtering tree decreases as α increases. However, the search time cost can increase as α increases, and it is worst when α = m: the search algorithm then iterates over every node in the tree, and the filtering vector in the only non-leaf node cannot help to filter out non-candidate nodes. The best α-filtering tree should balance width and depth; in particular, an α-filtering tree should have a depth of at least three for the filtering vectors to do any work. The B+ tree [43] is a multibranch tree widely adopted for storing indexes of large datasets, and in practice the degree of a B+ tree is usually set to the block size divided by the key size [43] when it stores an index for a dataset much larger than the one used in our experiments. These settings can help to define an appropriate α. The search time complexity of the α-filtering tree is O(log_α m), which suggests that a shallower tree has better search efficiency. However, the filtering vectors in a shorter tree filter fewer nodes than the larger number of filtering vectors in a taller tree, which can result in worse search efficiency. Thus, it is hard to define the best setting of α for a given m, and this needs further discussion.
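As a rough aid for choosing α, the snippet below applies the B+-tree-style heuristic mentioned above (degree ≈ block size / key size) and reports the resulting tree depth. The 4 KB block and 64-byte entry sizes are invented placeholder values, not figures from the paper.

```python
from math import ceil

def heuristic_alpha(block_bytes, entry_bytes):
    """B+-tree-style degree: how many child entries fit in one disk block."""
    return max(2, block_bytes // entry_bytes)

def tree_depth(m, alpha):
    """Depth of an α-branching tree over m leaves (cf. Theorem 2)."""
    depth, n = 1, m
    while n > 1:
        n = ceil(n / alpha)
        depth += 1
    return depth

alpha = heuristic_alpha(block_bytes=4096, entry_bytes=64)   # -> 64
for m in (20000, 120000):
    print(m, alpha, tree_depth(m, alpha))
# a depth of at least three is needed for the intermediate
# filtering vectors to prune anything
```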
Conclusion

In terms of the efficiency problem of privacy-preserving multi-keyword ranked search, we propose an α-filtering tree index search scheme based on bisecting k-means clusters. The scheme utilizes the characteristics of a multibranch tree, which greatly reduces the spatial complexity of the index tree. At the same time, the idea of clustering is used to store related documents close together in the index tree, which greatly strengthens the pruning on the index tree and thus improves search efficiency. On the other hand, since the index tree nodes are stored in the form of clusters and the bisecting k-means clustering requires a large amount of time, the flexibility of modifying the index tree could be limited. The experimental results on a real-world dataset show that, to a certain extent, our scheme can greatly improve the search efficiency of privacy-preserving multi-keyword ranked search while guaranteeing the accuracy of the search results.

Data Availability

The text data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Quantum geometrodynamics of Einstein and conformal (Weyl-squared) gravity

We discuss the canonical quantization of general relativity and Weyl-squared gravity. We present the classical and quantum constraints and discuss their similarities and differences. We perform a semiclassical expansion and discuss the emergence of time for the two theories. While in the first case semiclassical time has a scale and a shape part, in the second case it only has a shape part.

Introduction

The quantization of gravity is perhaps the most important open problem in theoretical physics [1]. Among the oldest approaches is quantum geometrodynamics ("Wheeler-DeWitt equation"). This is still a promising approach because it is very conservative: its equations are found if one searches for quantum wave equations that directly lead to Einstein's equations in the semiclassical (WKB) limit [2]. In 1926, Erwin Schrödinger found his wave equation by formulating classical mechanics in Hamilton-Jacobi form and "guessing" a wave equation that leads to this form in the limit of geometric optics. We recall what he wrote in 1926 [3]:

We know today, in fact, that our classical mechanics fails for very small dimensions of the path and for very great curvatures. Perhaps this failure is in strict analogy with the failure of geometrical optics . . . that becomes evident as soon as the obstacles or apertures are no longer large compared with the real, finite, wavelength. . . . Then it becomes a question of searching for an undulatory mechanics, and the most obvious way is by an elaboration of the Hamiltonian analogy on the lines of undulatory optics.

The same procedure can be applied to Einstein's equations [2]. Written in Hamilton-Jacobi form, they can be "translated" into the Wheeler-DeWitt equation and the diffeomorphism (momentum) constraints. These equations are timeless, that is, there is no external time parameter or external spacetime background. One can introduce an intrinsic time through the structure of the Wheeler-DeWitt equation, whose kinetic term is locally hyperbolic; this intrinsic time is related to √h, where h is the determinant of the three-metric h_ab. An extrinsic time is sometimes used in the alternative approach of reduced quantization; a typical choice is York's time (see e.g. [1], p. 148), which is related to the trace K of the extrinsic curvature. Starting with quantum geometrodynamics as the fundamental framework, one can derive the limit of quantum (field) theory in an external background spacetime by a Born-Oppenheimer type of approximation scheme with respect to the Planck mass [1,2]. At leading order, the semiclassical or "WKB time" is recovered as an emergent concept from the timeless quantum equations. In this contribution, we want to shed more light on the nature of emergent time. We will study the quantum geometrodynamics corresponding to general relativity (GR) and the quantum geometrodynamics corresponding to conformal (Weyl-squared) gravity, briefly called Weyl theory. The latter serves as a model for a gravitational theory devoid of scale. We will discuss the common features of and the differences between these theories. For this purpose, we will employ a unimodular decomposition of the three-metric. In quantum GR (section 2), we can identify the scale and the shape part of the WKB time; the scale part is related to the intrinsic time in the vacuum Wheeler-DeWitt equation. So the WKB time needs for its "passing" both scale and shape.
In section 3, we present a model of quantum geometrodynamics based on Weyl-squared gravity. We find there equations similar to the equations of quantum GR, but with a different explicit structure. In section 4, we discuss the semiclassical limit of these equations, which gives rise to a "Weyl-WKB time" (WWKB time). It has the property that the evolution of the scale degree of freedom does not contribute to it; this is because the WWKB time inherits the conformal symmetry of the Weyl-squared action, from which the intrinsic time is absent. WWKB time is thus scale-less and conformally invariant; we call it the shape time. In section 5, we present our conclusions. Further details and references can be found in our papers [4,5,6].

Canonical quantum general relativity

Instead of the usual canonical formulation of GR and its quantization, we formulate the quantum geometrodynamics here in variables which manifestly reveal the conformal behaviour of the theory. We are interested, in particular, in identifying the three-metric volume √h and the trace of the extrinsic curvature K as canonical variables carrying a single degree of freedom. This allows us to keep track of "intrinsic time" and "extrinsic time" by investigating what happens to these canonical degrees of freedom. These variables are called unimodular-conformal variables and are discussed in detail in [4,5,6]. To formulate GR in these variables, we briefly review here section 2 from [7]. The extrinsic curvature and its trace are given by K_ij = (1/2N)(∂_t h_ij − D_i N_j − D_j N_i) and K = h^{ij} K_ij, where D_i is the covariant derivative with respect to the three-metric. We decompose these, as well as the metric variables, into a scale part and a shape part, the unimodular-conformal decomposition, with the unimodular shape metric h̃_ij = h^{−1/3} h_ij (so that det h̃_ij = 1) and a scale density a determined by a⁶ ∝ h, where κ = 8πG (c = 1); we keep in mind that ⁽³⁾R can also be decomposed if needed (see the Appendix in [5]). Since the configuration variables are a and h̃_ij, the conjugate momenta are p_a and p̃^ij, respectively. Using the definitions (2)-(4) and recalling the standard ADM momentum, these new momenta can be related to the trace and the traceless part of the ADM momentum, respectively, so that p̃^ij is traceless.

Performing a Legendre transformation, we see that the total Hamiltonian is simply a linear combination of first-class constraints. The momenta with respect to the lapse density N̄ and shift N_i are the primary constraints p_N̄ ≈ 0 and p_i ≈ 0, respectively. They give rise to secondary constraints: the Hamiltonian constraint H^E_⊥ ≈ 0 and the momentum constraints H^E_i ≈ 0, which take a specific form in unimodular-conformal variables. For completeness, we have added the cosmological constant Λ. It is important to notice that the constraints explicitly depend on the scale density, which is related to the intrinsic time √h; recall (2). It is this and only this variable that is responsible for the conformal non-invariance of GR [7].

Canonical quantization now proceeds by promoting the canonical variables to operators acting on wave functionals Ψ[h̃_ab(x), a(x)]. From the classical Poisson brackets, one obtains the standard commutators. The constraints are then promoted to operators annihilating the wave functional, leading to the Wheeler-DeWitt equation and the momentum constraints, respectively. We have added the parts of the Hamiltonian and momentum constraints, Ĥ^m_⊥ and Ĥ^m_a, coming from quantized matter (non-gravitational fields), so that the wave functional now depends on the matter fields too, Ψ ≡ Ψ[h̃_ab, a, φ].
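As a numerical illustration of the unimodular-conformal split, the sketch below decomposes a random three-metric into a scale density and a unimodular shape part and verifies the defining properties. It assumes the common convention a = h^{1/6} and h̃_ij = h^{−1/3} h_ij; the factors of κ that make the variables dimensionless are omitted, and the ω⁴ weight of the conformal rescaling is one common 3D convention.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
h_metric = A @ A.T + 3 * np.eye(3)      # random symmetric positive-definite 3-metric

h = np.linalg.det(h_metric)             # determinant h
a = h ** (1 / 6)                        # scale density: a^6 = h (assumed convention)
h_shape = h ** (-1 / 3) * h_metric      # unimodular shape metric

assert np.isclose(np.linalg.det(h_shape), 1.0)     # det(h~_ij) = 1: pure shape
assert np.allclose(a**2 * h_shape, h_metric)       # h_ij = a^2 h~_ij: reconstruction

# under a conformal rescaling h_ij -> omega^4 h_ij, the shape part is
# invariant while only the scale density transforms (a -> omega^2 a)
omega = 1.7
h_conf = omega**4 * h_metric
assert np.allclose(np.linalg.det(h_conf) ** (-1 / 3) * h_conf, h_shape)
assert np.isclose(np.linalg.det(h_conf) ** (1 / 6), omega**2 * a)
```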
It is evident from (13) that the Wheeler-DeWitt equation has an indefinite kinetic term, with a playing the role of "intrinsic time". This intrinsic time is, however, constructed from the three-metric, so the Wheeler-DeWitt equation is timeless in the sense of an absence of spacetime. That spacetime has disappeared is in full analogy to the absence of classical trajectories in quantum mechanics. The absence of spacetime is symbolized in Figure 1 by the absence of the pointer in the clock. (But the scale is present, waiting to be measured in the semiclassical limit.)

The notion of spacetime can be recovered from full canonical quantum GR by a Born-Oppenheimer type of expansion with respect to the Planck mass squared, m²_P = ℏ/G. In this limit, we obtain the well-known theories of quantum field theory in curved spacetime (see e.g. [1] for all technical details). We write the total wave functional in the form Ψ = exp(iS/ℏ), where φ stands for the matter field, and perform an expansion of S (which is a complex function) with respect to the Planck mass squared, S = m²_P S_0 + S_1 + m⁻²_P S_2 + ⋯. We insert this expansion into the Wheeler-DeWitt equation and compare different orders of m²_P. At the highest order, m⁴_P, we find that S^E_0 is independent of φ. At order m²_P, one obtains the Hamilton-Jacobi equation for S^E_0. Rewriting this in terms of the original variables and using the chain rule for the functional derivative with respect to the metric, one obtains the Einstein-Hamilton-Jacobi equation, whose kinetic term involves the inverse of the DeWitt supermetric [1]. Since this equation is equivalent to all Einstein equations [8], one has recovered classical GR at this order of the semiclassical expansion.

At the next order, m⁰_P, an equation for S^E_1 is obtained that can be simplified by splitting off a prefactor D and demanding that D satisfy the standard WKB prefactor equation [1]. After some manipulations, one arrives at an equation for ψ^(1) whose form reminds one of the Tomonaga-Schwinger equation. Using again (18), the left-hand side of this equation can be rewritten in terms of the original variables (middle expression in (21)), a form well known from the literature [1]. To make the Tomonaga-Schwinger form explicit, we introduce a local "bubble" (Tomonaga-Schwinger) time functional τ(x) and write (21) in the form iℏ δψ^(1)/δτ(x) = Ĥ^m_⊥(x) ψ^(1), also known as the Tomonaga-Schwinger equation. Note that Ĥ^m_⊥ in (23) differs from Ĥ^m_⊥ in (22) by a factor of a, which also induces the same relative rescaling of the bubble time. This is just a consequence of using unimodular-conformal variables.

It is of interest to have a closer look at the bubble time in unimodular-conformal variables. We see that there are two main components: a derivative along the direction (in functional configuration space) of the scale density a, and a component along the five directions of the conformal (shape) part h̃_ij. The scale part is the part that corresponds to intrinsic time in the full Wheeler-DeWitt equation (13). We call the two independent contributions to the bubble time "scale time" and "shape time", respectively. Figure 2 symbolizes the emergence of time (the pointer) in the semiclassical approximation. While the picture shows only the scale time, the full semiclassical time contains, of course, also the shape part. We finally note that τ is not a scalar function, because this would be in contradiction with the commutator of the matter Hamiltonian densities at different space points [9].
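The order-by-order structure of this expansion can be summarized compactly. The display below collects the WKB-type ansatz and the resulting hierarchy in the generic form found in, e.g., Ref. [1]; here G_abcd denotes the DeWitt supermetric, V the gravitational potential term, and the precise prefactors and signs are convention-dependent rather than taken from this paper.

```latex
% Schematic Born-Oppenheimer hierarchy for the Wheeler-DeWitt equation
% (generic textbook form; prefactors and signs are convention-dependent)
\begin{align*}
 &\Psi[h_{ab},\phi] = \exp\!\Big(\tfrac{i}{\hbar} S\Big), \qquad
   S = m_P^{2} S_0 + S_1 + m_P^{-2} S_2 + \cdots \\[4pt]
 &\mathcal{O}(m_P^{4}):\quad
   \Big(\frac{\delta S_0}{\delta\phi}\Big)^{2} = 0
   \;\Rightarrow\; S_0 = S_0[h_{ab}], \\[4pt]
 &\mathcal{O}(m_P^{2}):\quad
   \frac{1}{2}\,G_{abcd}\,\frac{\delta S_0}{\delta h_{ab}}
   \frac{\delta S_0}{\delta h_{cd}} + V[h_{ab}] = 0
   \qquad \text{(Einstein-Hamilton-Jacobi)}, \\[4pt]
 &\mathcal{O}(m_P^{0}):\quad
   i\hbar\,\frac{\delta\psi^{(1)}}{\delta\tau(x)}
   = \hat{\mathcal{H}}^{m}_{\perp}\,\psi^{(1)}, \qquad
   \frac{\delta}{\delta\tau(x)} \equiv
   G_{abcd}\,\frac{\delta S_0}{\delta h_{ab}}\,\frac{\delta}{\delta h_{cd}}.
\end{align*}
```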
A functional Schrödinger equation can be obtained from the Tomonaga-Schwinger equation after choosing a particular foliation and integrating over space. The next order of the Born-Oppenheimer expansion, m⁻²_P, leads to genuine quantum-gravitational effects, for example corrections to the power spectrum of the CMB anisotropies [10].

Canonical quantum Weyl-squared gravity

The starting point is the Weyl-squared action, S_W = α_W ∫ d⁴x √(−g) C_μνλρ C^μνλρ, where C_μνλρ is the Weyl tensor. This action is invariant under the Weyl transformations g_μν → ω²(x) g_μν. We have written this action in a form that is appropriate for discussing the semiclassical approximation of its quantum version: α_W is a dimensionless constant, and it is the quantity S_W/ℏ that is relevant for the semiclassical expansion. We do not expect the Weyl theory to be a classical alternative to GR, but it may be of relevance in the early universe and at the Planck scale, where scales may become irrelevant [11]. We note that (25) defines a theory with higher derivatives, which demands an enlargement of the configuration space (see [5] for details and references to earlier work). Here, we have to add the extrinsic curvature K_ab as an additional independent canonical variable besides the three-metric h_ab. This can be accomplished by adding a constraint to the original Lagrangian which takes care of the definition (1). In unimodular-conformal variables, the new Lagrangian takes the form given in [5], where "T" stands for the traceless part of the corresponding quantity. The canonical momenta follow from (27); note that the momenta p̄^ij and P̄^ij are traceless. The quantity C̄^T_ab is the traceless part of the "electric" part of the Weyl tensor [5].

The constraint analysis proceeds in the standard way. In addition to the primary constraints with respect to lapse and shift (as known from GR), we have here an additional primary constraint, P̄ ≈ 0, where P̄ is the momentum conjugate to K̄. The presence of this constraint suggests that K̄ is arbitrary, in the same manner as p_N̄ ≈ 0 and p_i ≈ 0 suggest that N̄ and N_i are arbitrary. The secondary constraints follow from the conservation of the primary constraints (attention is here restricted to the vacuum case). A combination of the secondary constraint Q_W and the primary constraint P̄ generates conformal transformations (of a and K̄) [12,5]. Due to the enlargement of the configuration space, the structure of these constraints differs from that of GR; see (9) and (10). We note, in particular, that the intrinsic time in (9), which is related to the determinant of the three-metric, is absent here, because the scale density is arbitrary due to conformal invariance. This will later be connected with the emergence of a pure shape time in the semiclassical limit.

Canonical quantization now proceeds in a manner analogous to (11) and (12), taking into account the enlarged configuration space. Adding a (conformally coupled) matter part symbolized by φ, we arrive at a quantum wave functional Ψ[h̃_ij, a, K̄^T_ij, K̄, φ], which is subject to a set of quantum constraints. The first of these is the "Weyl-Wheeler-DeWitt equation" (WWDW equation); the next are the new quantum momentum constraints. As for the classical constraints, the WWDW equation is structurally different from the Wheeler-DeWitt equation (13). Its dynamics is determined not only by the three-metric but also by the extrinsic curvature (which classically corresponds to the time evolution of the three-metric). But there is no scale a, and thus no intrinsic time. This is symbolized in Figure 3. We also note that in the vacuum case ℏ drops out of these equations.
The third and fourth of the above constraints show that the wave functional does not depend on a and K̄ (δΨ/δa = 0 and δΨ/δK̄ = 0) and is thus conformally invariant (apart possibly from a phase). This is the most important difference from GR, especially in the light of the semiclassical approximation, which we discuss in the next section.

Recovery of (shape) time in the semiclassical limit

In this section, we discuss the semiclassical approximation scheme for the WWDW equation [4]. While in quantum GR (section 2) m⁻²_P was used as the expansion parameter, this role is played here by α⁻¹_W. Let us consider Weyl gravity with a conformally coupled matter field φ. We restrict ourselves to this case (a massless vector or a conformally coupled scalar field) because the constraints then remain first class, and a and K̄ remain arbitrary. The general case is discussed in [6]. Writing Ψ = exp(iS/ℏ) and performing an expansion of S with respect to the (dimensionless) variable α⁻¹_W, we find equations at consecutive orders of α_W; the order α¹_W gives the Weyl-Hamilton-Jacobi equation for S^W_0.

[Figure 5: Emergence of full WKB time from a scale-less quantum theory of gravity?]

Conclusion

Let us briefly summarize our main points as follows:

• The canonical quantization of Weyl gravity can be performed analogously to GR, but the structure of the configuration space is different: it consists of both the three-metric and the extrinsic curvature.

• We have performed a consistent decomposition of variables into a scale part and a conformally invariant part. The constraints can be rewritten in terms of the conformally invariant parts.

• We have displayed and discussed the Weyl-Wheeler-DeWitt equation and the Weyl diffeomorphism constraints. There are also new constraints which show that the wave functional is conformally invariant.

• We have discussed the semiclassical expansion and the recovery of time for both quantum GR and the quantum Weyl theory. Whereas in the former we get both a scale and a shape time, the latter only leads to a shape time.
Three-qubit topological phase on entangled photon pairs

We propose an experiment to observe the topological phases associated with cyclic evolutions, generated by local SU(2) operations, on three-qubit entangled states prepared on different degrees of freedom of entangled photon pairs. The topological phases reveal the nontrivial topological structure of the local SU(2) orbits. We describe how to prepare states showing different topological phases, and discuss their relation to entanglement. In particular, the presence of a π/2 phase shift is a signature of genuine tripartite entanglement, in the sense that it does not exist for two-qubit systems.

I. INTRODUCTION

Topological phases of quantum systems that evolve in topologically nontrivial spaces have attracted considerable attention in a wide variety of subdisciplines of modern physics. Perhaps the most well-known example of such a topological quantity is the Aharonov-Bohm phase acquired by a charged particle that encircles a shielded magnetic flux line [1]. This phase depends only on the winding number of the particle's path around the impenetrable region of magnetic flux, and is insensitive to perturbations of the path. A topological phase acquired by a pair of entangled qubits undergoing cyclic local unitary evolution has been discovered [2]. The topological interpretation of this phase relies on the relation between two-qubit states and the rotation group SO(3) [3]. This is perhaps most clearly seen in the case of the maximally entangled states. These states are in one-to-one correspondence with the points of the real projective space S³/Z₂ ≅ SO(3) [4,5]. The two possible topological phases, 0 and π, can be associated with the two homotopy classes of loops in SO(3). In other words, the accumulated phase is not affected by continuous deformations of the path of the cyclic evolution. The topological two-qubit phase has been observed in spin-orbit transformations on a laser beam [6] and in a nuclear magnetic resonance setting [7].

The notion of topological phase has been extended to pairs of entangled higher-dimensional quantum systems [8]. These phases are integer multiples of 2π/d, where d is the Hilbert space dimension of each subsystem. Thus, for such objects, fractional values may occur. The topological phase for a given cyclic local SU(d) evolution of a state |ψ⟩ = Σ_{k,l=1}^{d} α_kl |kl⟩ is restricted by the invariance of the determinant det α_kl of the coefficient matrix. Since det α_kl = 0 for product states, the topological phase is only well-defined in the presence of entanglement. Recently, the notion of topological phase has been extended to N-qubit systems [9]. These multi-qubit phases may take fractional values for N ≥ 3. The number of possible values increases rapidly with the number of qubits. All possible values for up to N = 7 have been found using a combinatorial algorithm [9].
Furthermore, a relation between the topological phases and the degree of nonzero polynomial entanglement invariants has been conjectured [9]. As an example of such a relation, the possible topological phases 0, π/2, π, and 3π/2 for N = 3 can be linked to multipartite entanglement in the sense that the three-tangle is a polynomial invariant of degree n = 4, namely the hyperdeterminant of the coefficient matrix α_klm [10]. This implies that the allowed topological phases are indeed restricted to integer multiples of 2π/n = π/2.

In order to realize a multiple-qubit system in a photonic device, one may combine different degrees of freedom that can be manipulated independently. Numerous experiments have employed polarization and orbital angular momentum (OAM) to implement controlled operations [11-15] and spin-orbit Bell inequalities [16,17]. Here, we propose an experiment to measure the topological phases for N = 3, with qubits encoded on photon pairs produced by spontaneous parametric down-conversion (SPDC). Each photon carries a polarization and an orbital degree of freedom. The three qubits are encoded in the orbital part of the signal photon and the two polarizations, by projecting the orbital part of the idler photon on a well-defined Laguerre-Gaussian mode. In this way, we demonstrate different three-qubit states that acquire the different three-qubit phases by employing local SU(2) transformations in Franson loop interferometers on each photon. The observed phases would be a signature of the local orbits and thereby a nontrivial signature of multipartite entanglement.

The outline of the paper is as follows. The theory of topological three-qubit phases arising in local SU(2) evolution is described in Sec. II. Sections III-V contain the experimental setup: the generation of three different types of three-qubit states is described in Sec. III, the measurement of topological phases is described in Sec. IV, and examples of evolutions that reveal the topological phases are given in Sec. V. The paper ends with the conclusions.

II. THREE-QUBIT TOPOLOGICAL PHASE STRUCTURE

When considering interconvertibility of three-qubit states under stochastic local operations and classical communication (SLOCC), the genuinely tripartite entangled states fall into two classes [18]. These classes are termed the GHZ-class and the W-class after their representatives, the GHZ state and the W state. By considering interconvertibility under local unitary transformations, the two SLOCC-classes can be further divided into local unitary classes, or in other words, orbits of the group of local unitary transformations. The structure of such an orbit constitutes a qualitative description of the entanglement of the states belonging to the orbit. This is the most detailed description of the entanglement properties that can be given [19]. Since the action of the U(1) group is a trivial global phase shift, it is sufficient to consider the local SU(2)-orbits to study entanglement properties.

The structure of the SU(2)-orbits of entangled three-qubit states has been studied by Carteret and Sudbery in Ref. [20]. In particular, it was shown that the local SU(2) orbit of a GHZ state |ψ_ghz⟩ = (1/√2)(|+++⟩ + |−−−⟩), where |+⟩ and |−⟩ are orthogonal states, is quadruply connected. The four different homotopy classes of cyclic evolutions correspond to the four different accumulated phases, 0, π/2, π, and 3π/2.
Since these are the only phases allowed for a state with nonzero three-tangle, it follows that the quadruple connectedness is related to the tripartite entanglement measured by the three-tangle. A π/2 phase shift cannot be generated in a two-qubit system and is therefore a measurable quantity that indicates the presence of tripartite entanglement.

Four different topological phases is in fact not the most common topological phase structure for local SU(2)-orbits belonging to the GHZ SLOCC-class. Using the canonical form of three-qubit states of Carteret et al. [21], it can be seen that the set of local SU(2) orbits that exhibit four topological phases forms a subset of the local SU(2) orbits of the GHZ SLOCC-class parameterized by four real parameters, while the full set of local SU(2) orbits is parameterized by four real and one complex parameter. The states of this subset can, up to local SU(2) operations, be written in the form a|+++⟩ + b|+−−⟩ + c|−+−⟩ + d|−−+⟩, where a, b, c, d ∈ C\{0} such that |a|² + |b|² + |c|² + |d|² = 1. We will refer to this class of states as the X-class. A distinguished member of this class is the three-qubit state |ψ_X⟩ for which a = b = c = d = 1/2, termed the three-qubit X state in Ref. [22]. This state is maximally entangled in the sense that all reduced density operators for the individual qubits are proportional to the identity. Note that the X state can be brought to the GHZ state by application of a Hadamard transformation on each qubit. Hence, these two states are entangled in exactly the same way.

The states in the GHZ SLOCC-class that do not fall in the X-class have only two different topological phases. Since the X-class is a lower-dimensional subset, a generic state in the GHZ SLOCC-class is of this kind. An example of such a state, with a doubly connected local SU(2) orbit, is a biased GHZ state |ψ_bghz⟩ = α|+++⟩ + β|−−−⟩, where |α| ≠ |β| and |α|² + |β|² = 1 [20]. The two homotopy classes of cyclic evolutions correspond to the accumulated phases 0 and π. The X state and a biased GHZ state thus represent the two different topological phase structures present in the GHZ SLOCC-class. The remaining three-qubit states with genuine tripartite entanglement belong to the W SLOCC-class, and have either the topological phases 0 and π, or no topological phases at all. We would thus not see any other sets of topological phases by studying states in the W class.

This paper is concerned with three-qubit systems encoded in the polarization and orbital angular momentum (OAM) states of photons. We will describe the polarization states in a basis of right and left circular polarization states |+⟩ and |−⟩, or alternatively in a basis of horizontal and vertical polarization states |H⟩ and |V⟩. The relation between these basis vectors is given by |±⟩ = (1/√2)(|H⟩ ± i|V⟩). The OAM states will be described in terms of a basis of first-order Laguerre-Gaussian modes LG_{1,0} and LG_{−1,0}, denoted |+⟩ and |−⟩ similarly to the circular polarization states, or in a basis of the first-order Hermite-Gaussian modes HG_{1,0} and HG_{0,1}, denoted |h⟩ and |v⟩ similarly to the linear polarization states. The relation between these bases is given by |±⟩ = (1/√2)(|h⟩ ± i|v⟩). It is useful to note that the X state prepared in a basis of circular polarization states and Laguerre-Gaussian modes is the GHZ state, up to a relative phase factor −i between the two terms, in a basis of horizontal and vertical polarization states and Hermite-Gaussian modes.
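As a quick numerical check of this basis-change relation, the sketch below writes the X state in the {|+⟩, |−⟩} basis (with |±⟩ = (|H⟩ ± i|V⟩)/√2 on each qubit, as above) and re-expresses it in the {|H⟩, |V⟩} basis; the bit convention (0 ↔ |+⟩, first tensor factor most significant) is an arbitrary choice for illustration.

```python
import numpy as np

# single-qubit change of basis: columns express |+>, |-> in the {|H>, |V>} basis
B = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)
B3 = np.kron(np.kron(B, B), B)

# X state in the {|+>, |->} basis: equal superposition of the even-parity kets
psi_x = np.zeros(8, dtype=complex)
for k in (0b000, 0b011, 0b101, 0b110):   # |+++>, |+-->, |-+->, |--+>
    psi_x[k] = 0.5

psi_hv = B3 @ psi_x                      # same state, {|H>, |V>} basis
# expect (|HHH> - i|VVV>)/sqrt(2): a GHZ state up to the relative phase -i
target = np.zeros(8, dtype=complex)
target[0b000], target[0b111] = 1 / np.sqrt(2), -1j / np.sqrt(2)
assert np.allclose(psi_hv, target)
```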
For example, if the X state in the {|+⟩, |−⟩} basis has been encoded in the polarization and OAM states of a photon pair, such that the first and last qubits are encoded in polarization states and the middle one in the OAM state of one of the photons, the same state in the {|H⟩, |V⟩} and {|h⟩, |v⟩} bases would be (1/√2)(|HhH⟩ − i|VvV⟩). We will consider the X state, the GHZ state, and a biased GHZ state in the {|+⟩, |−⟩} basis, since this allows us to implement cyclic local SU(2) evolutions that reveal the topological phases and lie completely within the set of operators that are diagonal in the {|+⟩, |−⟩} basis. Considering the X state, there are evolutions in each homotopy class that are diagonal in the {|+⟩, |−⟩} basis, thus allowing all possible topological phases to be observed. This is true also for the biased GHZ state.

III. QUANTUM STATE PREPARATION

Our experimental proposal is based on the spontaneous parametric down-conversion (SPDC) source of entangled photons first demonstrated in Ref. [23], and later used in other experiments [24,25]. There, two adjacent nonlinear crystals cut for type-I phase matching are spatially oriented with their optical axes mutually orthogonal. Starting from a linearly polarized laser, a quarter-wave plate (QWP-p) can be used to produce a circularly polarized pump and generate pairs of polarization-entangled photons of the kind (1/√2)(|HH⟩ − i|VV⟩), where the first term on the right-hand side comes from the V component of the pump while the second one comes from the H component.

In order to realize the three-qubit system, we may add the orbital angular momentum (OAM) quantum state of the photon pair [26]. As already demonstrated [27-29], the spatial correlations imposed by the phase-matching condition in parametric down-conversion are manifested in the OAM transfer from the pump to the down-converted photons, giving rise to an OAM-entangled state of the form Σ_m C_m |m⟩_s |l − m⟩_i, where m and l are the topological charges of the signal and pump photons, respectively. OAM conservation then imposes that the added topological charge of signal and idler equals that of the pump, leading to a superposition of all components compatible with this condition. The probability amplitude C_m associated with a particular OAM partition is proportional to the spatial overlap between the signal, idler, and pump transverse modes [30].

Now, the three-qubit realization can be achieved by pumping the SPDC source with a Laguerre-Gaussian mode with l = +1 and detecting the idler photon with a single-mode fiber (SMF) that admits only the l − m = 0 component. Coincidence measurements should then be obtained only for signal photons with m = +1. Since the subspace of first-order paraxial modes has a qubit structure [31], we can now encode two qubits on the signal photon, namely its polarization and OAM, and a single qubit on the idler polarization. From now on, we shall omit the idler OAM, since no operations other than detection filtering will be performed on this degree of freedom. Therefore, the initial three-qubit state generated is (1/√2)(|H + H⟩ − i|V + V⟩), where we have grouped together the signal degrees of freedom.

Now we shall discuss separately the two entangled three-qubit states of interest. We further show how to prepare certain product states that are used to investigate the role of entanglement in the topological phase measurements.

A. X State

First, we will see how to produce the three-qubit quantum state showing the π/2 topological phase. The proposed setup is sketched in Fig. 1.
In order to understand the setup simply, it is useful to recall that the X state in the {|+⟩, |−⟩} basis corresponds to a GHZ state in the {|H⟩, |V⟩} basis. Therefore, following the setup, we shall be seeking this state. First, an astigmatic mode converter can be used to transform the signal LG mode to a horizontal first-order HG mode [32], giving (1/√2)(|HhH⟩ − i|VhV⟩). This state could also be produced by pumping the crystals with the first-order Hermite-Gaussian mode h, still filtering the idler with the single-mode fiber. In this case, the signal mode with optimal spatial overlap with pump and idler is also h. This would dispense with the mode converter, making the system alignment considerably easier. Then, a spin-orbit controlled-NOT (CNOT) gate is used to flip the signal HG mode conditioned on its polarization. The CNOT gate is a Mach-Zehnder interferometer with input and output polarizing beam splitters (PBS). A Dove prism (DP) oriented at 45° and inserted in the (V) arm performs the transverse-mode conversion |h⟩ → |v⟩ in this arm. After the CNOT gate, the three-qubit quantum state becomes the desired X state, (1/√2)(|HhH⟩ − i|VvV⟩).

B. Biased GHZ state

In order to produce the biased GHZ state showing only the topological phase π, the setup shown in Fig. 2 can be used. First, a half-wave plate (HWP-p) with a suitable orientation is placed in the pump laser to set its polarization so as to produce the partially entangled state α|HH⟩ + β|VV⟩, so that the initial three-qubit state will be α|H + H⟩ + β|V + V⟩. With the astigmatic mode converter removed, the transformation |+⟩ → |−⟩ is performed in the (V) arm of the CNOT gate, giving α|H + H⟩ + β|V − V⟩. Now, two quarter-wave plates inserted in the signal (QWP-s) and idler (QWP-i) paths make the polarization transformations |H⟩ → |+⟩ and |V⟩ → |−⟩ needed to produce the desired biased GHZ state |ψ_bghz⟩ = α|+++⟩ + β|−−−⟩.

C. Product states

In order to investigate the role of entanglement in the topological phase measurements, it is important to compare the quantum states discussed above with product states that are equivalent to the X state and the biased GHZ state as regards the single-qubit probabilities. For example, the product state

|ψ_prod⟩ = (1/√2)(|H⟩ + |V⟩) ⊗ (1/√2)(|h⟩ + |v⟩) ⊗ (1/√2)(|H⟩ + |V⟩)   (12)

has the same probability distribution as the X state for each individual degree of freedom in both the {|H⟩, |V⟩} basis and the {|+⟩, |−⟩} basis. This state is readily prepared by the setup shown in Fig. 3 when the pump polarization is set to V and the down-converted photons are created in the product state |H + H⟩. In the signal arm, the mode converter is then oriented to make the transformation |+⟩ → (|h⟩ + |v⟩)/√2, and the H polarization passes unaffected through the CNOT gate. Then, two half-wave plates can be used to set the signal (HWP-s) and idler (HWP-i) polarizations to (|H⟩ + |V⟩)/√2, thus producing |ψ_prod⟩.

A product state with the same probability distributions as the biased GHZ state in the {|+⟩, |−⟩} basis could be

|ψ′_prod⟩ = (α|+⟩ + β|−⟩)^⊗3 = (α′|H⟩ + β′|V⟩) ⊗ (α′|h⟩ + β′|v⟩) ⊗ (α′|H⟩ + β′|V⟩),   (13)

where α′ = (α + β)/√2 and β′ = i(α − β)/√2. |ψ′_prod⟩ can be produced in the same way as |ψ_prod⟩, but with suitable settings of the mode converter and the HWPs in order to provide the coefficients α′ and β′. Both |ψ_prod⟩ and |ψ′_prod⟩ could also be produced by tailoring the pump mode in order to optimize the spatial overlap between pump, idler, and the desired signal mode, without the use of a mode converter in the signal arm. The role played by entanglement in the topological phase evolution can be investigated with two-photon interferometry, as we shall see in Section V. The interference patterns produced by entangled states are clearly distinguished from those expected for product states.
IV. TOPOLOGICAL PHASE MEASUREMENT

Under local unitary operations, the quantum state of the three-qubit photon pairs evolves while keeping its entanglement unaltered. The topological nature of the phase evolution is strongly dependent on entanglement, so it is important to identify entanglement signatures in the state evolution. As in Ref. [6], signatures of entanglement can be found in interference patterns between the evolved and the initial state. Two-photon interference can be achieved with the well-known Franson setup, where each photon from a quantum-correlated pair is sent through two alternative paths, a long and a short one [33-36]. When the delay time between the short and the long paths is larger than the detection time window, the two-photon coincidence count exhibits interference patterns. Each coincidence count may result from both photons following either the short or the long paths; photons going through different paths do not coincide. Moreover, each arm of an SPDC source has a considerably short coherence length, so that no single-photon interference can occur. Then, the overlap between the evolved and the initial state appears as the fringe visibility when the three qubits are individually operated on in one arm and left unchanged in the other, as sketched in Fig. 4.

In order to simplify the experimental proposal, while still covering the topological structure of the three-qubit local SU(2) orbits, we shall be dealing with unitary operations that are diagonal in the {|+⟩, |−⟩} basis, implemented with half-wave plates acting on the polarization qubits and Dove prisms acting on the OAM qubit.

[Fig. 5: path traced by the DHWPs and DDP in the single-qubit Poincaré representation.]

Starting from a |+⟩ state, a HWP with its fast axis oriented at an angle θ makes the transformation |+⟩ → |θ + π/4⟩ → |−⟩, where |θ⟩ represents a linear polarization state along a direction rotated by the angle θ with respect to the horizontal. Therefore, a sequence of two HWPs oriented at θ and θ + φ, respectively, makes the cycle |+⟩ → |θ + π/4⟩ → |−⟩ → |θ + φ + π/4⟩ → |+⟩, which corresponds to the path A → B → C → D → A in Fig. 5. Since this path is composed of two geodesic segments, enclosing a solid angle Ω = φ, a purely geometric phase φ/2 is acquired by single qubits initially prepared in |+⟩. Of course, the same solid angle would be enclosed starting from an initial state |−⟩, but traversed in the opposite direction, giving a geometric phase −φ/2. Therefore, each degree of freedom follows the local SU(2) operation U(φ) = diag(e^{iφ/2}, e^{−iφ/2}), given in the {|+⟩, |−⟩} basis, so that the overall three-qubit state will be transformed according to U(φ_s) ⊗ U(φ_o) ⊗ U(φ_i), where φ_s, φ_o, and φ_i correspond to the signal polarization, the OAM, and the idler polarization, respectively.

Under these local operations, product and entangled states evolve differently as regards the overlap between the initial and the evolved states, ⟨ψ|U(φ_s) ⊗ U(φ_o) ⊗ U(φ_i)|ψ⟩. In order to access these differences and investigate the role played by entanglement and its relationship to the three-qubit topological structure, we must perform interferometric measurements in which a proper background for comparison between product and entangled states can be established. A possible strategy is suggested in Fig. 4, where dynamical phases θ_s and θ_i are deliberately added in one arm of the interferometer. As these dynamical phases are continuously varied, the coincidence count exhibits an interference pattern that should evolve as the single-qubit unitary operations are applied.
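The expected fringe behaviour can be previewed numerically. The sketch below applies the diagonal operation U(φ) ⊗ U(φ) ⊗ U(φ), with U(φ) = diag(e^{iφ/2}, e^{−iφ/2}) as above, to the X state and to a product state with the same single-qubit statistics, tracking the overlap ⟨ψ|U⊗U⊗U|ψ⟩, whose modulus gives the fringe visibility and whose argument gives the fringe shift. The state vectors and the bit convention (0 ↔ |+⟩) are illustrative choices.

```python
import numpy as np

def U(phi):
    """Diagonal single-qubit operation in the {|+>, |->} basis."""
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

def overlap(psi, phi_s, phi_o, phi_i):
    U3 = np.kron(np.kron(U(phi_s), U(phi_o)), U(phi_i))
    return np.vdot(psi, U3 @ psi)

# X state: even-parity superposition in the {|+>, |->} basis (0 <-> |+>)
psi_x = np.zeros(8, dtype=complex)
for k in (0b000, 0b011, 0b101, 0b110):
    psi_x[k] = 0.5

# product state with the same single-qubit statistics
plus = np.array([1, 1]) / np.sqrt(2)          # (|+> + |->)/sqrt(2)
psi_prod = np.kron(np.kron(plus, plus), plus).astype(complex)

# cyclic evolution: phi_s = phi_o = phi_i = -pi at the end of the cycle
v_x = overlap(psi_x, -np.pi, -np.pi, -np.pi)
v_p = overlap(psi_prod, -np.pi, -np.pi, -np.pi)
assert np.isclose(abs(v_x), 1.0) and np.isclose(np.angle(v_x), np.pi / 2)
assert np.isclose(abs(v_p), 0.0)              # fringes vanish for the product state
```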
In fact, the coincidence count is proportional to

C = C_0 {1 + V cos(θ + Φ)},

where C_0 is the coincidence offset, |ψ⟩ is one of the selected states discussed above, θ = θ_s + θ_i is the total dynamical phase added, and

V e^{iΦ} = ⟨ψ|U(φ_s) ⊗ U(φ_o) ⊗ U(φ_i)|ψ⟩.

Therefore, the absolute value V of the overlap between the initial and the evolved states is related to the fringe visibility, while the overlap phase Φ, i.e., the Pancharatnam relative phase [37,38], translates into a fringe displacement. For a cyclic evolution, the fringes should recover maximal visibility and exhibit the accumulated phase shift, which is of topological nature for entangled states. However, the role of entanglement must be captured from signatures in the evolution of the interference pattern as the individual unitary operations are applied. We shall investigate these signatures numerically in the next section.

V. NUMERICAL RESULTS

To demonstrate the presence of the topological phases of the two different topological structures represented by the X state and the biased GHZ state, we give examples of cyclic unitary evolutions in each homotopy class for both states. The evolution of the interference pattern for these entangled states, as the unitary evolution is gradually implemented, is compared to the evolution of the interference patterns of product states with the same local statistics in the {|+⟩, |−⟩} basis. Since we consider unitary evolutions that are diagonal in the {|+⟩, |−⟩} basis, no difference in the interference patterns between an entangled state and such a product state can be attributed to the local degrees of freedom. Any difference is thus due to entanglement.

A. X state

A set of cyclic evolutions of the X state resulting in a π/2 phase shift is generated by unitary operators U(φ_s(t)) ⊗ U(φ_o(t)) ⊗ U(φ_i(t)) for which φ_s, φ_o, and φ_i each evolve continuously from 0 to −π. For such an evolution of |ψ_X⟩, the overlap between the initial and the evolved state is

⟨ψ_X|U(φ_s) ⊗ U(φ_o) ⊗ U(φ_i)|ψ_X⟩ = (1/4)[e^{i(φ_s+φ_o+φ_i)/2} + e^{i(φ_s−φ_o−φ_i)/2} + e^{i(−φ_s+φ_o−φ_i)/2} + e^{i(−φ_s−φ_o+φ_i)/2}],

which determines the coincidence intensity C as a function of θ, φ_s, φ_o, and φ_i. One unitary operator of this kind is U_X1(t), given by φ_s(t) = φ_o(t) = φ_i(t) = −πt/T. If the X state is evolved by U_X1(t), the coincidence intensity as a function of t and θ is given by

C(t, θ) = C_0 {1 + cos θ cos³(πt/2T) − sin θ sin³(πt/2T)}.   (18)

We can see that there is a reappearance of maximal fringe visibility at t/T = 1 with the expected fringe shift π/2. Moreover, there are no values of t/T for which the interference fringes disappear. This illustrates that, in contrast to the case of maximally entangled two-qubit states [3], nontrivial topological phases can be obtained without passing through a state orthogonal to the initial one. The coincidence intensity in Eq. (18) for selected values of t/T is shown in the left panel of Fig. 6.

Another cyclic evolution in the same homotopy class can be generated by the unitary operator U_X2(t), which takes the three qubits through the same phases one at a time: for 0 ≤ t ≤ T,

φ_s(t) = −(3πt/T)[H(t) − H(t − T/3)] − π H(t − T/3),

with φ_o(t) and φ_i(t) defined analogously on the second and third subintervals, where H is the Heaviside step function defined by H(x) = 0 for x < 0 and H(x) = 1 for x > 0. The coincidence intensity as a function of θ and t is then

C(t, θ) = C_0 {1 + cos θ cos(3πt/2T)} for 0 ≤ t ≤ T/3, C(t, θ) = C_0 for T/3 ≤ t ≤ 2T/3, and C(t, θ) = C_0 {1 − sin θ sin(3π(t − 2T/3)/2T)} for 2T/3 ≤ t ≤ T.   (20)

Again we can see the expected fringe shift π/2, with maximal fringe visibility at t/T = 1. For the cyclic evolution generated by U_X2(t), as opposed to that generated by U_X1(t), the interference fringes disappear for 1/3 ≤ t/T ≤ 2/3, meaning that the evolution takes the system through states orthogonal to the initial state during the evolution. With respect to the evolution of the fringe visibility, there are thus qualitatively different evolutions within the same homotopy class. The coincidence intensity in Eq. (20) for selected values of t/T is shown in the left panel of Fig. 7.

To verify that the fringe shifts are due to entanglement, we consider the product state |ψ_prod⟩, defined in Eq.
(12), which has the same probability distributions for the local degrees of freedom as the X state in both the {|H⟩, |V⟩} and the {|+⟩, |−⟩} bases. The coincidence intensity as a function of θ, φ_s, φ_o, and φ_i for |ψ_prod⟩ is

C = C_0 {1 + cos θ cos(φ_s/2) cos(φ_o/2) cos(φ_i/2)}.   (21)

We can see that the fringe visibility for φ_s(T) = φ_o(T) = φ_i(T) = −π is zero. For |ψ_prod⟩, the only values of φ_s, φ_o, and φ_i that give maximal fringe visibility are 0 and 2π. Thus, the reappearance of maximal fringe visibility at the value −π of φ_s, φ_o, and φ_i, with a π/2 fringe shift, is due to the entanglement of the X state. The coincidence intensity in Eq. (21) for the evolutions of |ψ_prod⟩ generated by U_X1(t) and by U_X2(t) at selected values of t/T is shown in the right panels of Figs. 6 and 7, respectively.

Note that the cyclic unitary evolutions that give maximal fringe visibility for |ψ_prod⟩ are also cyclic evolutions of the X state. This, however, holds only for the diagonal unitary operators we are considering; a more general cyclic evolution of |ψ_prod⟩ is typically not a cyclic evolution of |ψ_X⟩. There is also a set of cyclic evolutions, diagonal in the {|+⟩, |−⟩} basis, that generate the π phase shift of the X state. However, since these are also cyclic evolutions of the product state |ψ_prod⟩, the resulting reappearance of maximal fringe visibility and π phase shift cannot be attributed to the entanglement of the X state. To observe a π phase shift that cannot be attributed to local degrees of freedom, we must implement a cyclic evolution generated by unitaries that are not diagonal in the {|+⟩, |−⟩} basis. We recall, however, that the X state is identical to the GHZ state in a different basis, and for the GHZ state there are evolutions generated by diagonal unitaries leading to a π phase shift that can be attributed to entanglement.

B. GHZ and biased GHZ state

We consider the GHZ state |ψ_ghz⟩ = (1/√2)(|+++⟩ + |−−−⟩) and the biased GHZ state |ψ_bghz⟩ = α|+++⟩ + β|−−−⟩, with |α|² − |β|² = 1/2 in the plots below. The cyclic evolutions of these states that result in a π phase shift are generated by diagonal unitaries for which the sum φ_s(T) + φ_o(T) + φ_i(T) = ±2π. There are no evolutions generated by unitaries of this kind that take the biased GHZ state through a state orthogonal to the initial state; thus, its interference fringes never disappear. The GHZ state, on the other hand, is evolved through an orthogonal state. One unitary operator of this kind is U_bghz(t), given by φ_s(t) = φ_o(t) = φ_i(t) = 2πt/3T. The coincidence intensity for the evolution of |ψ_ghz⟩ generated by U_bghz(t) is a function of t and θ given by

C(t, θ) = C_0 {1 + cos θ cos(πt/T)}.   (22)

The coincidence intensity in Eq. (22) for selected values of t/T is shown in the lower left panel of Fig. 8. The coincidence intensity for the evolution of |ψ_bghz⟩ generated by U_bghz(t), on the other hand, includes an additional term and is given as a function of t and θ by

C(t, θ) = C_0 {1 + cos θ cos(πt/T) − (1/2) sin θ sin(πt/T)}.   (23)

The coincidence intensity in Eq. (23) for selected values of t/T is shown in the upper left panel of Fig. 8. For both the GHZ state and the biased GHZ state, the coincidence intensity depends only on the sum φ_s(t) + φ_o(t) + φ_i(t) and not on the individual φ's. For both states there is a reappearance of maximal fringe visibility at t/T = 1, and the interference fringes are shifted by a π phase.

To see the signature of the entanglement in the interference pattern of |ψ_bghz⟩, we also consider the product state |ψ′_prod⟩, defined in Eq. (13), which has the same probability distributions for the local degrees of freedom in the {|+⟩, |−⟩} basis as |ψ_bghz⟩. The interference intensity as a function of θ and t when |ψ′_prod⟩ is evolved by U_bghz(t) is

C(t, θ) = C_0 {1 + cos θ cos(πt/3T)[1 − (7/4) sin²(πt/3T)] − sin θ sin(πt/3T)[(3/2) cos²(πt/3T) − (1/8) sin²(πt/3T)]}.   (25)

Comparing Eqs.
(23) and (25), we see that the reappearance of maximal fringe visibility at t/T = 1 is absent for |ψ′_prod⟩; moreover, the fringe shift is not equal to π. The reappearance of fringe visibility and the π phase shift are thus due to entanglement. The coincidence intensity for |ψ′_prod⟩, evolved by U_bghz, is shown for selected values of t/T in the upper right panel of Fig. 8, alongside the coincidence intensity of |ψ_bghz⟩.

To see the signature of entanglement in the interference pattern of |ψ_ghz⟩ in Eq. (22), we compare with the interference pattern of |ψ_prod⟩. The coincidence intensity for |ψ_prod⟩ is given by Eq. (21), and we can see that there is no reappearance of maximal fringe visibility for φ_s(t) = φ_o(t) = φ_i(t) = 2π/3. Thus, the reappearance for |ψ_ghz⟩ can be attributed to entanglement, and the phase shift is of topological nature. The coincidence intensity for |ψ_prod⟩, evolved by U_bghz, is shown for selected values of t/T in the lower right panel of Fig. 8, alongside the coincidence intensity for |ψ_ghz⟩.

VI. CONCLUSIONS

We propose an experimental scheme to observe the topological phases acquired by special classes of three-qubit states. These phases reveal the nontrivial topological structure of the local SU(2) orbits. In particular, observation of the π/2 topological phase shift would be a signature of multiqubit entanglement, as this phase exists only for more than two qubits. The experimental proposal is within the technological resources available in most quantum optics laboratories and can be implemented in the short term. Furthermore, the insensitivity to continuous path deviations of the unitary evolution is a robust feature of the topological phases, with potential applications to quantum information processing.
Balance conditions in variational data assimilation for a high-resolution forecast model

This paper explores the role of balance relationships for background-error covariance modelling as the model's grid box decreases to convective scales. Data assimilation (DA) analyses are examined from a simplified convective-scale model and DA system (called ABC-DA) with a grid box size of 1.5 km in a 2D 540 km (longitude), 15 km (height) domain. The DA experiments are performed with background-error covariance matrices (B) modelled and calibrated by switching on/off linear balance (LB) and hydrostatic balance (HB), and by observing a subset of the ABC variables, namely v (meridional wind), ρ̃′ (scaled density, a pressure-like variable), and b′ (buoyancy, a temperature-like variable). Calibration data are sourced from two methods of generating proxies of forecast errors. One uses forecasts from different latitude slices of a 3D parent model (here called the latitude slice method), and the other uses sets of differences between forecasts of different lengths but valid at the same time (the National Meteorological Center method). Root-mean-squared errors computed over the domain from identical twin DA experiments suggest that there is no combination of LB/HB switches that gives the best analysis for all model quantities. However, it is frequently found that the B-matrices modelled with both LB and HB do perform the best. A clearer picture emerges when the errors are examined at different spatial scales. In particular, it is shown that switching on HB in B mostly has a neutral/positive effect on the DA accuracy at 'large' scales, and switching off the HB has a neutral/positive effect at 'small' scales. The division between 'large' and 'small' scales is between 10 and 100 km. Furthermore, one-hour forecast-error correlations computed between control parameters find that correlations are small at large scales when balances are enforced, and at small scales when balances are not enforced (ideal control parameters have zero cross-correlations). This points the way to modelling B with scale-dependent balances.

INTRODUCTION

Balance constraints of one form or another have been used to help formulate data assimilation (DA) problems for many years, e.g., Lorenc (1981); Derber and Bouttier (1999); Gao et al. (1999); Berre (2000); Ingleby (2001); Fisher (2003); Ge et al. (2012); Tong et al. (2016). Balance constraints are useful for a number of reasons, especially (though not exclusively) when used with variational and hybrid schemes. Firstly, they are used to dampen damaging unbalanced motion produced spuriously as a by-product of the analysis procedure. This is done by building them into the formulation of the static background-error covariance matrix, B (Parrish and Derber, 1992; Derber and Bouttier, 1999; Gauthier et al., 1999; Bannister, 2008), by enforcing them as an extra constraint in the cost function (a J_c term) (Hu et al., 2006; Kleist et al., 2009), and by applying them post analysis explicitly or implicitly (Courtier and Talagrand, 1990; Lynch and Huang, 1992; Bloom et al., 1996; Potvin and Wicker, 2013). Secondly, when a balance constraint is imposed strongly, it reduces the number of degrees of freedom that a DA problem has to deal with. Thirdly, and specifically for variational and hybrid DA schemes, balance conditions guide the definition of the control variables, which are assumed to be mutually uncorrelated, thus defining a model of B (Bannister, 2008; Song and Kang, 2019).
The choice of balance conditions (if any) relevant to a particular DA problem depends upon the properties of the flow, which in turn depend on the fluid's geographical location and on the scales of motion considered. For instance, geostrophic or hydrostatic balances, dominant in flows of small Rossby number typical of the large-scale extratropical free troposphere, are not always well suited to regimes that have a potentially high Rossby number, for example, tropical or small-scale flows (Sun, 2005; Sun et al., 2014; Yano et al., 2018), rain (Caron and Fillion, 2010), convection (Vetra-Carvalho et al., 2012), or high flow curvature (Fisher, 2003). Despite these concerns, balance relationships like the linear and nonlinear balance equations, and hydrostatic balance, whether in dynamical or regressed forms, are associated with the formulation of B for km-scale DA systems, for example, Barker et al. (2004); Honda et al. (2005); Brousseau et al. (2011); Ballard et al. (2016); Heng et al. (2020); Xu et al. (2020). As models adopt smaller and smaller grid lengths, this leads to obvious questions about whether the use of balance conditions in such small-scale systems is optimal, efficient, or appropriate, and whether they can be dropped or replaced with alternative balance conditions that are more appropriate at convective scales. To date, there have been few systematic studies of 'turning off' balance conditions in the B of variational or hybrid DA systems. There have been a number of studies that appear to have compared the performance of 3D-Var systems through a change of variational control variables from a set that exploits balance operators to a set that does not (Xie and MacDonald, 2012; Zakeri et al., 2018; Shen et al., 2019; Thiruvengadam et al., 2019; Wang et al., 2020). These studies did find benefits of the change, but the change had been made at the same time as a change of momentum control variables from streamfunction- and velocity potential-based variables to zonal and meridional wind-based variables. It remains an open question which balance conditions, if any, remain appropriate for use with convective-scale DA. This paper revisits the problem of the use of geophysical balance to model B, as used in variational and hybrid schemes, when applied to midlatitude flows that contain small scales that are expected to have a considerable amount of unbalanced flow by their nature. The model used for most of this study is a high-resolution (1.5 km grid length) version of the simplified ABC model (Petrie et al., 2017) and its variational DA system, ABC-DA (Bannister, 2020), but some results use ensemble forecasts from a 1.5 km grid length version of the Met Office's Unified Model (UM) for a limited-area domain over the southern UK. The structure of this paper is as follows. In Section 2 we briefly describe the ABC-DA system. In Section 3 we summarise the balance relationships considered in this study. In Section 4 we review how they are used in ABC-DA. In Section 5 we show how two populations of forecast-error proxies have been generated, which are used to calibrate our B matrices. In Section 6 we discuss the potential problems resulting from inappropriate balance conditions in B and demonstrate that such problems are evident in some sample data from the convection-permitting UM and ABC systems. In Section 7 we define and show the ABC-DA experiments performed with different balance conditions applied to B. In Section 8 we relate these results to the reasoning made earlier in the paper.
Finally, in Section 9 we summarise our findings, discuss the possible limitations of this study, and outline further work that could be done. In addition, the Appendix summarises the balance conditions for the Euler equations. Even though our focus (for simplicity) is the ABC-DA system, the conclusions of this paper will be of interest to operational centres, especially those that use 3D/4D-Var and hybrid methods.

THE ABC-DA SYSTEM

The ABC model (Petrie et al., 2017) is a set of simplified fluid equations which were designed to permit speedy research into convective-scale DA. They are modified versions of the compressible Euler equations, designed to exhibit balanced motion at large scales but unbalanced motion on smaller scales. The equations operate on a 2D longitude/height plane (x/z); they are partly linearised (the advection terms and the mass continuity equation remain nonlinear); the Brunt-Väisälä frequency is taken to be a constant, A (to control gravity wave frequencies); the advection terms and the mass divergence terms are modulated by a parameter 0 < B ≤ 1 (to control acoustic wave speeds); and the equation of state is simplified to relate pressure and density perturbations via an inverse compressibility coefficient, C. The result is a set of five prognostic equations for zonal wind, u, meridional wind, v, vertical wind, w, scaled density perturbation, ρ̃′ = ρ′/ρ₀ (where ρ₀ is a reference density), and buoyancy perturbation, b′ = (g/θ_R)θ′ (where g is the acceleration due to gravity, θ_R is the reference potential temperature, and θ′ is the potential temperature perturbation). The equations are given as equation (15) of Petrie et al. (2017). Additionally, pressure increments may be diagnosed from the simplified equation of state, p′ = Cρ′. The equations are dry, but work is currently under way to include simplified moist processes. A is the pure gravity wave frequency and √(BC) is the pure small-scale acoustic wave speed. The parameter B was introduced to slow the acoustic waves so that the equations can be solved using an explicit scheme (the split-explicit forward-backward scheme; Cullen and Davies 1991). In this paper the parameters are set as follows: A = 0.01 s⁻¹, B = 0.01, C = 10⁵ m² s⁻², and the Coriolis parameter is f = 10⁻⁴ s⁻¹. The domain of the model is 540 km by 15 km, and the grid is chosen to have 360 horizontal grid cells (of 1.5 km length) and 60 vertical levels, which is thought to be on the edge of the 'grey zone' where convective parametrizations are still needed in operational models (Yu and Lee, 2010) (although the ABC model presently captures only dry circulations). The model is thought to behave in a qualitatively similar way to the real atmosphere. Flows may be approximately decomposed into slow-moving/low-frequency balanced modes, intermediate-frequency gravity modes, and high-frequency acoustic modes. A linear analysis of this model performed about a state of rest (not shown) reveals a zero-frequency Rossby-like mode, which is in a state of geostrophic and hydrostatic balance; gravity modes with frequencies up to 0.01 s⁻¹ (the value of A) and speeds approaching 20 m s⁻¹; and acoustic modes with frequencies up to ∼0.7 s⁻¹ and speeds of ∼30 m s⁻¹. The fast gravity and acoustic waves allow the model to adjust as a result of added perturbations, for example, due to DA. These aspects, including a demonstration of geostrophic adjustment in ABC, are discussed in Petrie et al. (2017).
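The stated parameter values can be turned into the quoted wave characteristics directly. The following minimal sketch shows the arithmetic; the CFL-style time-step bound at the end is purely illustrative and is not the model's actual split-explicit stability criterion.

```python
import numpy as np

A = 0.01      # s^-1, pure gravity-wave frequency
B = 0.01      # dimensionless, modulates advection/mass divergence
C = 1.0e5     # m^2 s^-2, inverse compressibility
dx = 1500.0   # m, horizontal grid length

c_ac = np.sqrt(B * C)                                        # acoustic speed
print(f"acoustic speed ~ {c_ac:.1f} m/s")                    # ~31.6 m/s (cf. ~30 m/s quoted)
print(f"shortest gravity-wave period ~ {2*np.pi/A:.0f} s")   # ~628 s
print(f"naive CFL-style bound dt < dx/c ~ {dx/c_ac:.0f} s")  # ~47 s
```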
The associated DA system (Bannister, 2020) is an incremental 3D-Var/3DFGAT-based variational system. The B-matrix is modelled with a set of parameter/horizontal/vertical transforms in which the model variables are decomposed into balanced and unbalanced components which are assumed to be uncorrelated. The ABC-DA control parameters are streamfunction (δψ), velocity potential (δχ_vp), geostrophically unbalanced scaled density (δρ̃′_u, akin to unbalanced pressure in other systems), hydrostatically unbalanced buoyancy (δb′_u, akin to unbalanced temperature, which is not considered in other systems, for example, the Met Office's variational system, which assumes that all temperature increments are hydrostatically balanced), and vertical wind (δw, which is diagnosed from other variables in the Met Office's system). The scheme is flexible enough to allow the balance conditions used in the B-matrix to be switched on or off to study their effects on the analysis. Observations can be of any of the variables at arbitrary positions in space and time. The system includes a suite to calibrate the B-matrix from ensembles of possible background states, and a suite to run the assimilation in a cycled forecast/DA mode.

SUMMARY OF BALANCE RELATIONSHIPS FOR THE ABC SYSTEM

In this section we summarise the balance relationships considered in this paper. Interest is ultimately concerned with the equation sets of operational systems, so we give two versions of the balance relationships: those used for the ABC system are given here, and those relevant to the Euler equations are given in the Appendix.

Linear balance in ABC

The linear balance equation (LBE) in ABC emerges from a scale analysis of the zonal momentum equation for small Rossby number (section 2.2 of Petrie et al. (2017)):

δρ̃′^b = (f/C) δψ,   (1)

where a δ prefix indicates an increment, and a b superscript indicates the balanced part of the variable¹. Since f is constant in ABC, Equation (1) is equivalent to geostrophic balance.

¹ Note that in the ABC system, where there is no latitude dependence, the Helmholtz relations lead to δv = ∂δψ/∂x and δu = ∂δχ_vp/∂x, where δψ is the streamfunction increment and δχ_vp is the velocity potential increment.

Hydrostatic balance in ABC

The hydrostatic balance equation (HBE) emerges from a scale analysis of the vertical momentum equation, for either small Rossby number or a small ratio of vertical to horizontal wind magnitude:

δb′^b = C ∂(δρ̃′)/∂z.   (2)

A strongly imposed hydrostatic balance eliminates vertically propagating acoustic waves. In ABC-DA though, and unlike in other systems, some hydrostatically unbalanced analysis increments are allowed (Bannister, 2020).
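To make the two diagnostic relationships concrete, here is a minimal sketch applying Equations (1) and (2) (as reconstructed above) to gridded increments with simple finite differences. All field values and grid sizes are invented for illustration.

```python
import numpy as np

f, C = 1.0e-4, 1.0e5          # Coriolis parameter, inverse compressibility
nx, nz = 360, 60
dz = 15.0e3 / nz              # vertical grid spacing (m)

rng = np.random.default_rng(1)
d_psi = rng.standard_normal((nz, nx))      # streamfunction increment (proxy)
d_rho_b = (f / C) * d_psi                  # Eq. (1): balanced scaled density

d_rho = d_rho_b + 1e-10 * rng.standard_normal((nz, nx))  # total increment (toy)
d_b_b = C * np.gradient(d_rho, dz, axis=0)               # Eq. (2): balanced buoyancy
print(d_rho_b.shape, d_b_b.shape)
```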
USING THE BALANCE RELATIONS IN ABC-DA

The ABC-DA variational scheme is documented in Bannister (2020), but the relevant parts of the control variable transform (U), which define the B-matrix, are summarised as follows. As mentioned in Section 2, the control parameters of ABC-DA are δψ, δχ_vp, δρ̃′_u, δb′_u, and δw. These parameters are assumed to be mutually uncorrelated (in the sense of background errors) for the purposes of modelling B. These parameters are fields that still have spatial covariances, but they are related to the associated control variables (one for each of δψ, δχ_vp, δρ̃′_u, δb′_u, and δw) via the spatial transform, U_s. Control variables are taken to have no auto-covariances and to have unit background variances. The per-parameter blocks of U_s form its block-diagonal and are the square roots of the background-error auto-covariances; for example, the δψ block of U_s times its transpose is the auto-covariance matrix of δψ. Control variable transforms are described more fully in Bannister (2008).

The key parts of the transforms relevant to this paper are as follows (see also Table 1). The total scaled density increment, δρ̃′, is found as the sum of balanced, δρ̃′^b, and unbalanced, δρ̃′^u, parts, where the balanced part is found from δψ via the LBE (Equation (1)) (steps II and III of Table 1):

δρ̃′ = α δρ̃′^b + δρ̃′^u = α (f/C) δψ + δρ̃′^u.   (3)

The factor α is introduced to turn on (α = 1) or turn off (α = 0) the effect of the LBE in the covariance model. The total buoyancy increment, δb′, is similarly found as the sum of balanced, δb′^b, and unbalanced, δb′^u, parts, where the balanced part is found from δρ̃′ via the HBE (Equation (2)) (steps IV and V of Table 1):

δb′ = β δb′^b + δb′^u = β C ∂(δρ̃′)/∂z + δb′^u.   (4)

The factor β is introduced to turn on/off the effect of hydrostatic balance.

[Table 1 (flattened in extraction): a summary of the parameter transform of the convective-scale scheme detailed in Bannister (2020); control parameters are underlined in the original, α and β are the switches described in the text, and the final step (VI) gives the vertical wind, δw.]

Note that only α = 0, 1 and β = 0, 1 are considered in detail in this paper, although in the summary we consider treating α and β as continuous variables as a possible extension. These steps are similar to the Met Office's control variable transform (Lorenc et al., 2000; Ingleby, 2001), except that the Met Office (a) has an extra vertical regression step after application of the LBE, and (b) does not allow an unbalanced temperature increment (equivalent to always setting δb′^u = 0 in ABC). The vertical regression step acts to complete the prediction of the 'balanced' pressure and can allow for the inapplicability of enforcing the LBE directly. The vertical regression step is an optional part of ABC-DA, but it is not applied here because the earlier study of Bannister (2020) found it to degrade the accuracy of the ABC analysis, and the interest in this paper is in the application of purely analytical balance relationships.

The assumption that the control variables are uncorrelated leads to the implied B-matrix of this system. For instance, when α = 1 the implied background-error variance for scaled density emerges from Equation (3) as

⟨(δρ̃′)²⟩ = ⟨(δρ̃′^b)²⟩ + ⟨(δρ̃′^u)²⟩ + 2⟨δρ̃′^b δρ̃′^u⟩,   (5)

where ⟨•⟩ is the expectation over the hypothetical background probability density function², and the last term is the balanced/unbalanced scaled density covariance, which is zero by assumption. The last term in Equation (5) is zero if the zero-correlation assumption of the covariance model is correct. Since this is assumed in the DA, the implied covariance comprises only the first two terms of Equation (5). If exploration of the background-error statistics outside of the DA environment shows that this term is non-zero, then this term represents the covariance model's anomaly, and it will represent a sub-optimality in the DA. A similar argument applies to δb′. When α = 0 the balanced variance is not present, so the implied variance is purely unbalanced, and the implied covariance between δψ and δρ̃′ is lost.
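The decomposition of Equations (3)-(5), including the anomaly term, can be sketched directly on synthetic ensemble data. The toy fields and coefficients below are invented for illustration; the point is only that the three terms of Equation (5) sum to the total variance by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, nx = 260, 360
f, C = 1.0e-4, 1.0e5

d_psi = rng.standard_normal((n_ens, nx))
d_rho_b = (f / C) * d_psi                       # balanced part, via Eq. (1)
# Toy "truth" in which the balance relation is applied too weakly
# (scenario (b) of Figure 1): the true field is 1.5x the balanced part.
d_rho = 1.5 * d_rho_b + 1.0e-10 * rng.standard_normal((n_ens, nx))

alpha = 1
d_rho_u = d_rho - alpha * d_rho_b               # residual = unbalanced parameter

var_b = alpha**2 * np.mean(d_rho_b**2)          # first term of Eq. (5)
var_u = np.mean(d_rho_u**2)                     # second term
anomaly = 2 * alpha * np.mean(d_rho_b * d_rho_u)  # third term (zero by assumption)
print(np.allclose(var_b + var_u + anomaly, np.mean(d_rho**2)))  # True
```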
ORIGIN OF THE TWO CALIBRATION ENSEMBLES

Forecast-error statistics are affected by the observation network, the DA system, the length of the prior forecasts, and the errors in the formulation of the model (e.g., Houtekamer et al., 1996), including representativity error (Hodyss and Nichols, 2015). (Model and representativity errors are absent in this study as we perform only identical twin experiments to test the DA.) In this study we follow the standard procedure of attempting to calibrate B using a sample from these statistics. However, the generation of such a sample of forecast errors is difficult to do from scratch, since it requires the B-matrix to define the DA in the first place (to generate the forecasts' initial conditions). This is a 'chicken-and-egg' problem. For this reason, we accept that there is no easy or perfect solution and we proceed pragmatically. We propose two methods of defining populations of forecast errors used to calibrate B, and compare how each affects the performance of the DA system. Each method is described below.

5.1 The latitude slice method

The first method takes sequences of u and v fields from different latitudes of a dump of the Unified Model with the same horizontal grid length and domain size. Each slice is modified so that it obeys periodic boundary conditions (section 5.2 of Petrie et al., 2017). The w field is then found by imposing zero three-dimensional divergence. Since ρ̃′ and b′ are not part of the UM, these are found from the balance conditions (first ρ̃′ is found by integrating the LBE given v, and then b′ is found from the HBE given ρ̃′). This processing from each latitude provides a set of ABC initial conditions, which are then run through the ABC model for 1 hr. The resulting population of states forms an ensemble of 260 plausible model forecasts, and the deviations from the mean are assumed to be proxies for forecast errors. We believe that the 1 hr integration is long enough to spin up/down imbalances in the system, as revealed by studying variance spectra of balanced and unbalanced ρ̃′ as a function of lead time. These converge quickly to the 1 hr spectra shown in the next section.

5.2 The NMC method

The second method derives proxy forecast errors using the standard National Meteorological Center (NMC) method (Parrish and Derber, 1992), by taking a population of 260 '2 hr minus 1 hr' forecast differences (divided by √2) from the cycled ABC-DA system. As this method relies on an existing DA system (including the B-matrix), we are faced with a pragmatic choice, and we choose this underlying system to be the one calibrated with statistics from the latitude slice method where all balances in the B-matrix are switched off. This is an arbitrary decision, and investigating the effect of the underlying DA system on the NMC method is outside the scope of this work, but for now it provides an alternative set of ABC-DA results to study. Even though the NMC method is widely used, it does not yield perfect proxies of forecast errors (Berre et al., 2006). For instance, as the method uses differences between forecasts valid at the same time, it is likely to underestimate errors in unobserved quantities (in this study u and w are unobserved; Section 7).
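A minimal sketch of the NMC-method calibration step described above, assuming hypothetical arrays of paired forecasts; only the differencing and the 1/√2 scaling are taken from the text.

```python
import numpy as np

def nmc_proxies(fc_1h: np.ndarray, fc_2h: np.ndarray) -> np.ndarray:
    """NMC-method proxies of forecast error.

    `fc_1h` and `fc_2h` are hypothetical arrays of shape (n_pairs, nz, nx)
    holding 1 h and 2 h forecasts valid at the same times, collected from
    the cycled ABC-DA system.
    """
    d = (fc_2h - fc_1h) / np.sqrt(2.0)
    return d - d.mean(axis=0)        # centre the sample before calibrating B
```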
POSSIBLE CONSEQUENCES OF USING BALANCE CONDITIONS INAPPROPRIATELY

The model of B that gives Equation (5) is useful only if δρ̃′^b as found from δψ (Equation (1)) has a strong correlation with δρ̃′, and the residual, δρ̃′^u, is uncorrelated with δρ̃′^b. This may happen by δρ̃′^b and δρ̃′^u having very different time-scales (i.e., by the gravity modes being associated with faster processes than the Rossby modes, as discussed in Section 2). This is the situation in the schema shown in Figure 1a, which is the case expected for large-scale, midlatitude dynamics, where balanced increments may be thought of as defining a balanced manifold and unbalanced increments form a 'fuzzy' region around that manifold. Alternatively, when δρ̃′^b as found from δψ does not predict δρ̃′ well, there will be a large δρ̃′^u to compensate for the bad δρ̃′^b prediction. Consequently δρ̃′^u and δψ will become correlated (even though they are still assumed to be uncorrelated in the covariance model). This is the situation in Figure 1b-d. In cases (b, c), δρ̃′^b consistently under/overestimates δρ̃′ (representing cases when the balance relation is applied not strongly enough/too strongly), and in case (d) δρ̃′^b is completely unrelated to δρ̃′ (representing cases when there is no useful information in the applied balance relation). The implied variance for δρ̃′ is the sum of the balanced and unbalanced contributions (only the first two terms in Equation (5)), which may be an under- or overestimate of the true variance, meaning that observations of scaled density will be under- or overfitted by the DA, respectively. The quantity that demonstrates that a covariance model is not appropriate is the last term in Equation (5) (the anomaly), 2⟨δρ̃′^b δρ̃′^u⟩.

For illustration, Figure 2 shows the contributions to the mass variances at about 4 km elevation as a function of scale, as found from ensembles of the UM (a, b: at the two times 1230 and 1630 UTC on 20 September 2011) and of ABC forecasts (c, d: found from the latitude slice and NMC methods; Section 5). The mass variable is pressure in the UM and scaled density in ABC, the LBEs (Equations (A2) and (1)) are used to derive the balanced masses, and deviations from the means in these forecasts are proxies for background error (the Figure 2 caption gives details).

[Figure 2 caption (partial): The UM's horizontal plane is two-dimensional (so the total horizontal wavenumber is √(k_x² + k_y²)), whereas the ABC model's horizontal plane is one-dimensional. The downward spikes in the UM data are believed to be an artifact of the total-wavenumber binning. The mass variables are pressure and scaled density in the UM and ABC respectively, and the variances are averaged over a few levels around 4 km in height. The UM spectra are found from 24-member ensembles of 1 hr UM forecasts initialised from an Ensemble Transform Kalman Filter (Baker et al., 2014), where the balanced pressure is derived from the UM's horizontal winds (Appendix A.1) by imposing Neumann boundary conditions. The ABC spectra are found from the two 260-member ensembles detailed in Section 5, as used later to calibrate the DA system. The balanced scaled density is found from Equation (1). Where the 'unbalanced' line is not visible, it overlaps with the 'total' line, and enlarged parts of the spectra are shown in (b).]

Between the times 1230 and 1630 a cold front bringing precipitation passed through the domain. Note that, even though data from the UM are shown in Figure 2a,b, the Met Office procedure of accompanying the balance relation by vertical regression is not performed here. In (a) the balanced mass variance (dashed black line) is consistently smaller than the total variance (black continuous line), apart from at the largest scales, where the balanced mass dominates³. The unbalanced variance (dashed grey line) is mainly coincident with the total variance. The anomaly of the covariance model (continuous grey line) is of the same order as (but slightly larger than) the balanced variance, except at the very largest scales. Where the anomaly is much smaller than the total (∼20 to ∼200 km), scenario (a) of Figure 1 applies approximately.

³ The balanced mass at the largest scale is manually set to the level-mean total mass. This is the undetermined integration constant in computing the balanced mass, and it means that the unbalanced variances are precisely zero at this scale.
At smaller scales we suggest that scenario (b) applies, owing to the small balanced variance. In (b) there is a more varied ordering of the lines. Unlike in (a), the balanced variance does not take the lowest value, except marginally for scales between 40 and 80 km. However, at all other scales apart from the largest, the anomalous variance dominates. This suggests that either scenario (b) or (d) of Figure 1 is relevant for this panel. In (c), where the latitude slice data are studied for the ABC model, the total and unbalanced variances dominate over all scales apart from the largest, where only the total variance dominates, rather like the UM in (a). There appear to be three scale regimes. At the largest scales (above 300 km), the anomalous variance is the smallest. Between scales of 30 and 300 km, the anomaly is of the same order of magnitude as the balanced variance, but at scales smaller than 30 km the balanced variance naturally takes the smallest values, leaving the unbalanced variance to take all of the variance. The small value of the anomaly at all scales suggests that the scenario of Figure 1a is valid, although for many scales this is because the balanced variance is naturally negligible anyway. In (d), where the NMC variances are shown, there are some qualitative similarities and differences with (c). As in (c), the total and unbalanced variances dominate and the balanced variance has the smallest contribution over all scales, apart from at the largest scales. In (d) the anomalous variance still contributes a significantly larger proportion than the balanced variance over a wide range of scales, but it is still orders of magnitude smaller than the total variance, hinting that the covariance model may still be appropriate. Note that, despite the very different scales of the y-axes between (c) and (d) when spectrally resolved, the standard deviations of the NMC ρ̃′ errors are not smaller by orders of magnitude; the typical standard deviations of ρ̃′ errors from the NMC method are about one third of those from the latitude slice method. Where anomalous variances comprise a significant proportion of the total variance, this suggests that the LBE is not appropriate for the B model. These results show that, in terms of the validity of the LBE, there are some similarities and differences between the behaviours of a realistic 3D model like the UM and a 'toy' model like ABC, but that the applicability of the LBE in covariance modelling is likely to be less useful in a real system than in ABC. The rest of this paper is concerned only with ABC, but the above differences must be borne in mind when judging the relevance of our ABC DA results later in the paper to more realistic systems.
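The Figure-2-style diagnostic can be summarised in the following sketch, assuming ensemble perturbations at a single level and the reconstructed Equation (1) for the balanced mass; the anomaly falls out as the residual of the spectral variance budget.

```python
import numpy as np

def mass_variance_spectra(rho, psi, f=1.0e-4, C=1.0e5):
    """Spectral variance budget of the mass field at one level.

    `rho` and `psi` are assumed ensemble-perturbation arrays of shape
    (n_ens, nx); names and shapes are hypothetical.
    """
    rho_b = (f / C) * psi                 # balanced mass, Eq. (1)
    rho_u = rho - rho_b                   # unbalanced residual

    def spec(x):                          # ensemble-mean variance per wavenumber
        xh = np.fft.rfft(x, axis=1)
        return np.mean(np.abs(xh) ** 2, axis=0)

    total, bal, unbal = spec(rho), spec(rho_b), spec(rho_u)
    anomaly = total - bal - unbal         # = 2*Re<rho_b_hat conj(rho_u_hat)>
    return total, bal, unbal, anomaly
```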
Turning off a balance relation may indeed allow a more realistic autocovariance for either̃′ or b ′ , but this may come at the expense of destroying any multivariate components in the implied B-matrix. For each combination of and , and for each population of forecast errors described in Section 5, the spatial transforms mentioned in Section 4 are re-calibrated in the way described in Bannister (2020). Description of the experiments We perform two sets of four DA experiments. The first set uses a the latitude slice method as a population of possible forecast errors to calibrate B, and the second set uses the NMC method. The four DA experiments in each set are as follows: (a) = 0, = 0 (no balance equations used), (b) = 0, = 1 (only HBE used), (c) = 1, = 0 (only LBE used), and (d) = 1, = 1 (LBE and HBE used). The DA cycling period is 1 hr, where the analysis from each cycle is used to initialise a 1 hr forecast to yield the background of the next cycle, and 30 cycles are made. The first background is formed from the initial truth state with an initial perturbation of five times a random background error drawn from the modelled B-matrix. Observations of v,̃′, and b ′ are made at the start of each cycle, each on a 90 × 30 grid spanning the lower boundary to 12 km. The observation-error standard deviations for v,̃′, and b ′ are 0.5 ms −1 , 10 −3 and 1.5 × 10 −2 ms −2 , respectively. Triple observation assimilation increments In order to help understand the impact of assimilating observations of v,̃′, and b ′ in each of the four DA experiments, Figure 3 shows the analysis increments of assimilating an individual triple observation of v,̃′, and b ′ . For this experiment, the B-matrix derived from the latitude slice method is used 4 , and components of the innovation vector are set to the respective observation-error standard deviations. When = 0 and = 0 (first column), the increments are formed independently as the B-matrix in this case is univariate. The increments peak at the observation locations (crosses) and have negative side lobes. The different quantities have different length-scales, found from the calibration. When the HBE only is used to model the B-matrix ( = 0 and = 1, second column), the difference from the first column is mainly in thẽ′ increment, but the b ′ increment is also slightly affected. This is due to the HBE coupling these two quantities as Equation (2). When the LBE only is used ( = 1 and = 0, third column) the differences from the first column are with the v and̃′ increments. These increments have lost their horizontal symmetry: thẽ′ observation will encourage a dipole in v in order to maintain linear balance, but the v observation will encourage a positive v at the cross, thus shifting the dipole horizontally. When both HBE and LBE are used ( = 1 and = 1, fourth column) all fields are coupled and so all observations will affect all fields, giving rise again 4 Results for the NMC-derived B-matrix show smaller magnitudes to those in Figure 3, but of similar structures. to the asymmetry. The question is whether imposing such structures has a beneficial or harmful effect in cycled DA for this system. Figure 4 shows the domain-averaged root-mean-squared errors (RMSE) of each model quantity (rows) as a function of time for 0-to-1 hr forecasts starting from each cycled analysis (the times of the vertical yellow lines). The left column is for the systems calibrated with the latitude slice method and the right column for the NMC method. 
Recall that v, ρ̃′, and b′ are observed at the start of each cycle, but the unobserved quantities u and w are affected via the forecasts for each new background state. For conciseness, the experiments will be referred to as [α, β]. According to these statistics, there is no clear picture regarding which configuration of the B-matrix, [α, β], has the lowest RMSE, as the best and worst configurations differ between quantities and calibration methods. As the systems are quite complex, it is usually not possible to explain easily why some of the results appear, but there is still potentially useful information in describing them.

• For errors in u: in (a1) [0, 0] and [1, 1] are the best settings, but in (a2) only [1, 1] is the best. The fact that in (a1) turning on LB and HB separately worsens the RMSE, but turning them on together restores the performance, must be due to the interaction between the two settings. This permits useful v-b′ and ρ̃′-b′ covariances which enable extra information to be extracted from the observations, perhaps in some sense cancelling out the individual LB- and HB-related structures (see the full implied covariance matrix, equation (33) of Bannister (2020), with the further switch appearing in that equation set to zero). Note that, since u is always univariate and unobserved, it is not updated by the DA, and so information is propagated from these quantities to u by the ABC model from one cycle to the next.

• For errors in v: a similar discussion to that for u applies, but here the covariances can affect v directly.

• For errors in w: in (c1) [0, 1] and [1, 1] are the best settings, but in (c2) only [1, 1] is the best. All of these settings invoke HB covariances. As for u, w is always univariate and unobserved, and so is affected indirectly.

In general, the best compromise configuration is [1, 1] (the exception being the b′ analysis, especially with the latitude slice calibration method). The same conclusions are reached by studying 1-to-2 hr forecast errors (instead of the 0-to-1 hr forecasts above; not shown).

Spectrally resolved analysis-error variances

In order to shed some light on these results, Figure 5 shows snapshots (t = 15 hr) of error variance spectra for each quantity, averaged over all vertical levels. The spectra are found by Fourier transforming the errors at the snapshot time to give ε̂(k, z) (for each quantity ε ∈ {u, v, w, ρ̃′, b′}), where k is the wavenumber and z is the level, and then averaging [ε̂(k, z)* ε̂(k, z)]^{1/2} over z. The spectra are plotted as a function of wavelength, 2π/k. This is done for each quantity, for each LB/HB switch configuration, and for the latitude slice (left column) and NMC (right column) methods of calibrating B.

• For spectral error variances in u: in (a1) [0, 0] and [1, 0] are the best settings, which is evident at small scales, but in (a2) [1, 1] is marginally the best setting, which is evident at large scales. Given the log scales, and the fact that the variances reduce with scale, any settings that show superiority only at small scales will not necessarily be seen in the domain averages in Figure 4. This is relevant when comparing Figure 5a1 with Figure 4a1. The result for (a2), showing the superiority of [1, 1], can though be seen in Figure 4a2, since the difference is at large scales, which contribute most to the overall error variance. Figure 5a1 suggests that enforcing HB can be harmful if one is interested in small-scale analyses of u.
• For spectral error variances in v: in Figure 5b1 all settings appear to perform equivalently, but [0, 1] shows a slight increase in error at large scales. This is consistent with Figure 4b1 at t = 15 (computing similar spectra for t = 18 (not shown) reveals that [1, 1] is the best setting at large scales, which is consistent with Figure 4b1 at that time). In Figure 5b2 there is a contrast between large and small scales: at large scales [0, 1] and [1, 1] are best (consistent with Figure 4b2), but at small scales [0, 0] and [1, 0] are best. The Figure 5b2 result suggests that enforcing HB can be helpful if one is interested in large-scale analyses of v, but harmful if one is interested in small-scale analyses.

• For spectral error variances in w: in Figure 5c1 there is also a contrast between large and small scales: at large scales [0, 1] and [1, 1] are the best (consistent with Figure 4c1), but at small scales [0, 0] and [1, 0] are best. In Figure 5c2 [1, 1] is best at large scales (consistent with Figure 4c2). In general these results again suggest a helpful role of HB in the covariances at large scales, but a harmful one at small scales. As w is not observed, these effects follow indirectly.

• For spectral error variances in b′: in Figure 5e1 the best settings are [0, 0] and [1, 0], which appear at small scales, and in (e2) the best settings are (marginally) [0, 1] and [1, 1] at large scales. Both of these results are consistent with Figure 4e1,e2 at t = 15 hr. Once again, in general we find a helpful role of HB in the covariances at large scales, but a harmful one at small scales.

Although the performance of the different balance settings sometimes varies between the two calibration methods, there is a frequently observed characteristic: the HBE in particular remains a useful relationship above scales of 10 to 100 km, and is sometimes harmful below these scales.

CROSS-CORRELATIONS BETWEEN CONTROL PARAMETERS

In order to relate the results of Section 7 to the discussion in Sections 4 and 6, we briefly look at the correlations between the control parameters. Recall that anomalous (co)variances appear when there are correlations between the control variables. It was argued that assuming these correlations are zero when they are actually non-zero leads to sub-optimalities in the covariance model, possibly leading to an enhancement of errors in the DA. Figure 6 shows correlations (in spectral space) between a selection of control parameters calculated with each calibration method, to show how these change with scale and with the different balance configurations used (for brevity, only [0, 0] and [1, 1] are considered).

[Figure 6 caption: Cross-correlations of proxy 1 hr forecast errors of a selection of control parameters in spectral space. Proxies from the latitude slice method are shown in the left column and from the NMC method in the right column (Section 5). The experiments are α = 0, β = 0 (with control parameters δψ and δρ̃′ in (a1, a2); δρ̃′ and δb′ in (c1, c2)) and α = 1, β = 1 (with control parameters δψ and δρ̃′_u in (b1, b2); δρ̃′_u and δb′_u in (d1, d2)). The correlations are computed for each level and then averaged vertically over all levels.]
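A sketch of the Figure-6 computation as described in the caption (per-level spectral correlations, then a vertical average), assuming hypothetical proxy-error arrays:

```python
import numpy as np

def spectral_cross_corr(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Spectral cross-correlation of two control parameters.

    `a`, `b` are assumed arrays of proxy errors of shape (n_ens, nz, nx).
    Correlations are formed per level and wavenumber, then averaged over z.
    """
    ah = np.fft.rfft(a, axis=2)
    bh = np.fft.rfft(b, axis=2)
    cov = np.mean(ah * np.conj(bh), axis=0).real         # (nz, nk)
    sd = np.sqrt(np.mean(np.abs(ah) ** 2, axis=0) *
                 np.mean(np.abs(bh) ** 2, axis=0))       # (nz, nk)
    return np.mean(cov / sd, axis=0)                     # vertical mean, (nk,)
```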
The first row is the correlation of δψ with δρ̃′ for [0, 0]. The correlations are close to zero for the latitude slice (a1) and NMC (a2) methods at scales smaller than 30 km, but deviate away from zero at larger scales (the NMC method in (a2) shows a slight negative correlation trend, which is puzzling). The second row is the correlation of δψ with δρ̃′_u for [1, 1]. Clear from a comparison of the panels is that, whether or not the LBE is used, there is little effect on the correlations at small scales. Figure 6b1 shows that the large-scale non-zero correlations for [0, 0] are reduced. This is consistent with the lowering of the large-scale analysis errors of ρ̃′ in Figure 5d1 when the LBE is exploited (compare the black and dotted purple lines). The NMC correlations are made very slightly more negative at large scales, even though there is also a reduction in the analysis errors of ρ̃′ in Figure 5d2 when the LBE is exploited. The third row is the correlation of δρ̃′ with δb′ for [0, 0]. The correlations for the latitude slice (c1) and NMC (c2) methods are similar and are distributed about zero. The fourth row is the correlation of δρ̃′_u with δb′_u for [1, 1]. Figure 6d1,d2 both show the emergence of significant correlations between these unbalanced parameters at small scales, and a tightening of the correlations about zero at larger scales, which is consistent with an improvement in a number of quantities in Figure 5 (most notably Figure 5c1, but also in other panels, where the dotted purple lines are lower than the black lines at small scales and/or the reverse at large scales).

9 DISCUSSION, CONCLUSIONS AND FUTURE DIRECTIONS

Discussion and conclusions

This paper explores some of the dynamical issues regarding background-error covariances in convective-scale DA. This has been done primarily using the simplified (longitude/height domain) ABC model and its 3D-Var DA system (known as ABC-DA; Bannister 2020). This paper is likely to be of interest to centres developing or operating convective-scale variational (3D or 4D) or hybrid DA systems. Particular questions relate to the use or not of balance equations to model the background-error covariances, B. We test whether separating fields into their balanced and unbalanced components (Section 3) can allow these motions to be treated separately in DA (as separate, uncorrelated control parameters). Questions arise regarding their dynamical and statistical independence when the balance relationships themselves are thought to be invalid, such as for linear balance (LB) and hydrostatic balance (HB) at km scales of motion. A conceptual framework is described to show how correlations that are present between such control parameters (but unaccounted for) will lead to anomalies in the background-error covariance model and to sub-optimalities in analyses (Sections 4 and 6). An analysis of some proxies of 1 hr forecast errors of the Met Office system and of the ABC system does suggest, for instance, that there could be significant anomalies in the background-error covariances in the UM and ABC systems, at least for the mass field. A more definitive answer to whether the use of balance relationships at convective scale helps or hinders the DA problem is found by performing cycled DA experiments (Section 7). A set of cycled ABC-DA identical-twin experiments has been performed on a midlatitude limited-area domain with a grid size of 1.5 km, where LB and HB have been systematically switched off and on in B via the respective switches α = 0, 1 and β = 0, 1, indicated by [α, β]. Each experiment uses a separately calibrated B-matrix with training data from one of two methods: the 'latitude slice method', where proxies of forecast errors come from an ensemble of 1 hr forecasts whose members originate from latitude slices of a parent 3D model (a UM file), and the NMC method, where proxies of forecast errors are differences between 2 hr and 1 hr ABC forecasts.
The DA cycling time is 1 hr, and observations of v, ρ̃′ and b′ are assimilated (analogous to observations of wind, pressure and temperature respectively). Domain-averaged root-mean-squared errors indicate that there is no clear combination of LB and HB balance settings which gives the best results for all quantities. For instance, [1, 1] performs better than [0, 0] for w and ρ̃′ for both calibration techniques, but [1, 1] is worse than [0, 0] for b′, though only for the latitude slice calibration method. The [1, 1] configuration is arguably the most successful overall for both calibration techniques. The positive effect of HB, especially on w, is particularly interesting, as w is not observed and is a univariate variable in the DA. The positive effect is due to favourable influences of the assimilation with HB covariance structures on other variables, and to their coupling to w via the forecasts. Looking at the contributions to errors at different scales reveals that using HB in particular in B is rarely a disadvantage at 'large' scales, and not using balance is never a disadvantage at 'small' scales. The w errors in particular are interesting as, although HB provides an advantage on average, there is a clear disadvantage at 'small' scales (seen in the latitude-slice-calibrated results) and an advantage at 'large' scales (seen in both calibration techniques). The dividing line between 'large' and 'small' scales is found to be 10-100 km (depending on the quantity and calibration method). Scale-separated correlations between control parameters do show that HB gives uncorrelated control parameters at large scales, but univariate control variables (no balances enforced) lead to uncorrelated control parameters at small scales (Section 8). However, in the case of the LBE, setting α = 1 naturally leads to a very small magnitude of δρ̃′^b at small scales anyway, and so here it usually does not matter whether α = 0 or 1 in the ABC system.

Further analysis and possible future work

The results studied in this paper have focussed on switching the LBE and HBE on and off with α and β. Instead of treating α and β as switches (0 or 1), it is also possible to treat them as continuous variables, which can modulate the balanced components of the fields. In particular, one may choose 'optimal' values, which minimise the variance of the unbalanced variables and eliminate the cross-covariances between the balanced and unbalanced variables as found from training data (A.C. Lorenc, personal communication, 2021). These optimal values are

α_opt = ⟨δρ̃′ δρ̃′^b⟩ / ⟨(δρ̃′^b)²⟩,   β_opt = ⟨δb′ δb′^b⟩ / ⟨(δb′^b)²⟩.

Here the angled brackets represent an average over the training data, and recall that (f/C)δψ is the linearly balanced scaled density (Equation (1)), and C ∂(δρ̃′)/∂z is the hydrostatically balanced buoyancy (Equation (2)). The α_opt factor is like a simplified version of the vertical regression operator which the Met Office applies with the linear balance operator (see the text after Equation (4)). However, a rudimentary test of the configuration [α_opt, β_opt] in the data assimilation (involving a re-calibration) using both training ensembles (the latitude slice and NMC methods) did not result in a general decrease of RMSE in the analyses (not shown). Instead, the [α_opt, β_opt] analyses were often found to be less accurate than those of the main settings. This counterintuitive result may actually highlight a possible difficulty with the training data rather than being due to a breakdown of the idea. This points to further work to generate better training ensembles, which more closely resemble forecast errors.
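Under the least-squares reading of the optimal values reconstructed above, the calculation reduces to a regression coefficient; a sketch:

```python
import numpy as np

def optimal_switch(total: np.ndarray, balanced: np.ndarray) -> float:
    """Least-squares 'optimal' modulation of a balanced component.

    Returns <total*balanced>/<balanced^2>, which minimises the residual
    (unbalanced) variance and makes the residual uncorrelated with the
    balanced part over the training sample.
    """
    return float(np.mean(total * balanced) / np.mean(balanced ** 2))

# Hypothetical usage, with training perturbations rho, psi, b on a grid:
# alpha_opt = optimal_switch(rho, (f / C) * psi)                   # cf. Eq. (1)
# beta_opt  = optimal_switch(b, C * np.gradient(rho, dz, axis=0))  # cf. Eq. (2)
```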
Even though we know the 'truth' in our experiments, we do not know the true B-matrix, but it may be possible to approach it by bootstrapping, with multiple iterations of the calibration. This presents an interesting possibility that, as the imposed B-matrix improves, the DA system may well become better at correcting one set of scales over another. For instance, if the large scales become better captured, then the analysis and forecast errors will become more unbalanced, which has reportedly been seen in NWP systems over recent decades (A.C. Lorenc, personal communication, 2021). We note that other major aspects, like the NWP models and the observing systems, have improved over that time, as well as the ability to represent forecast-error covariances. Returning to the interesting scale-dependence of the errors found in this paper's results, this suggests a multi-scale approach to the problem of background-error covariance modelling. One way forward is to use duplicate sets of control parameters: one set for small scales, where no balance relations would be used (i.e., α and β set to zero or to small values), and another set for larger scales, where the balance relations would remain as they are in current systems (since large-scale errors would still need to be corrected in a convective-scale system). A transition between the two could occur between 10 and 100 km wavelength, and the mathematical framework for this could be a two-band waveband transform (Fisher and Andersson, 2001; Deckmyn and Berre, 2005; Bannister, 2007; Pannekoucke et al., 2007), with the smaller-scale band using univariate parameters (no balance relationships) and the larger-scale band using the balance relationships. These results may also be useful for ensemble-based convective-scale DA systems, where efforts made in the field of B modelling, as in this paper, could be used as a multivariate and scale-dependent localisation scheme (Caron and Buehner, 2018). Although we hope that this work is useful to the numerical weather prediction community, it has obvious limitations. Although the ABC equations exhibit scale-dependent balance characteristics (Petrie et al., 2017), ABC is a reduced-complexity, reduced-dimensionality model with dry dynamics. Incorporating moist processes into the model is work in progress, and it will be interesting to see if similar results are obtained in such a revised model. The conclusions are also limited by the imperfect way that proxies of forecast errors (used to calibrate B) are generated. It will also be interesting to see how the results change with latitude, for example, approaching the Equator, where the unbalanced motions would be expected to have a larger effect.

APPENDIX A. SUMMARY OF BALANCE RELATIONSHIPS FOR EULER'S EQUATIONS

For reference purposes, this Appendix gives the balance equations for Euler's equations.

A.1 Linear balance in Euler's equations

The linear balance equation (LBE) derived from Euler's equations relates the balanced pressure (p^b), the horizontal wind (u_h), and the balanced specific mass (ρ^b):

∇_h² p^b = −∇·(ρ^b f k × u_h),   (A1)

where ∇_h and ∇ are the horizontal and three-dimensional gradient operators respectively, f is the Coriolis parameter, k is the vertical unit vector, and u_h = (u, v, 0). It is assumed that u_h is completely balanced and so does not need a 'b' superscript. Equation (A1) is equivalent to geostrophic balance when f and ρ^b are constant.
For incremental DA the linearised form of Equation (A1) is needed:

∇_h² δp^b = −∇·[f k × (ρ̄^b δu_h + δρ^b ū_h)],   (A2)

where the δ-prefixed variables are increments and the other variables comprise the linearisation state; for example, ρ^b = ρ̄^b + δρ^b. The Met Office currently uses Equation (A2) with the assumption that δρ^b has a negligible contribution.
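For completeness, the linearisation step from (A1) to (A2), under our reconstruction of those equations; the mean-state terms cancel because the linearisation state is itself assumed to satisfy (A1), and the second-order term δρ^b δu_h is dropped:

```latex
% Substituting \rho^b = \bar{\rho}^b + \delta\rho^b and
% \mathbf{u}_h = \bar{\mathbf{u}}_h + \delta\mathbf{u}_h into (A1),
% subtracting the mean-state balance, and dropping the second-order
% term \delta\rho^b\,\delta\mathbf{u}_h gives (A2):
\nabla_h^2\,\delta p^b
  = -\nabla\cdot\big[f\,\mathbf{k}\times\big(\bar{\rho}^b\,\delta\mathbf{u}_h
      + \delta\rho^b\,\bar{\mathbf{u}}_h\big)\big].
```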
Transmembrane TNF and Its Receptors TNFR1 and TNFR2 in Mycobacterial Infections

Tumor necrosis factor (TNF) is one of the main cytokines regulating a pro-inflammatory environment. It has been related to several cell functions, for instance, phagocytosis, apoptosis, proliferation, and mitochondrial dynamics. Moreover, during mycobacterial infections, TNF plays an essential role in maintaining granuloma formation. Several effector mechanisms have been implicated, according to the interactions of the two active forms, soluble TNF (solTNF) and transmembrane TNF (tmTNF), with their receptors TNFR1 and TNFR2. We review the impact of these interactions in the context of mycobacterial infections. TNF is tightly regulated by binding to its receptors; however, during mycobacterial infections, the upstream activation signalling pathways may be influenced by key regulatory factors at either the membrane or the cytosol level. Detailing the structure and activation pathways used by TNF and its receptors, such as the interactions of solTNF/TNFRs versus tmTNF/TNFRs, may bring a better understanding of the molecular mechanisms involved in these activation pathways, which can be helpful for the development of new, more efficient therapies against mycobacterial infections.

Tumor Necrosis Factor (TNF) and Tumor Necrosis Factor-α Converting Enzyme (TACE)

TNF was described for the first time in the mid-1970s. It is a polypeptide considered a potent pro-inflammatory cytokine, encoded within the major histocompatibility complex in humans and mice, and produced by immune system cells of both myeloid and lymphoid origin [1-3]. TNF is synthesized as a monomeric protein and stored in vesicles, which use the route of the rough endoplasmic reticulum (ER) to cross the cytoplasm. TNF monomers form a compact trimer through non-covalent interactions; the TNF trimer has high thermodynamic stability, and the molecular mass of the human TNF trimer is 50.4 kDa and of murine TNF 50 kDa [4,5]. Active TNF is also expressed as a trimeric transmembrane form on the cell surface (hereafter tmTNF); after an activation stimulus, tmTNF is proteolytically processed and a soluble form is released (hereafter solTNF). Both tmTNF and solTNF display physiological functions [6]. TACE, also called A Disintegrin and Metalloproteinase (ADAM) domain 17 (ADAM17), cleaves tmTNF between residues Ala76 and Val77 to yield solTNF [7]. TACE is the only, or at least the major, sheddase of TNF in vivo; other ADAM family members such as ADAM10, ADAM9, and ADAM19 have been shown to shed TNF only in vitro, and their cleavage site does not match the physiologically relevant site [8]. TACE is not a TNF-specific protease; several other proteins, such as transforming growth factor-beta (TGF-β), beta-amyloid precursor protein, and the TNF receptors, are also released by TACE.
TNF can promote cancer cells' proliferation [18]. NRD (nardilysin) increases TNF shedding; consequently, the NF-κB signalling pathway and a pro-inflammatory microenvironment, characterized by the presence of interleukin (IL)-1β, IL-6 and prostaglandin E2, are activated. In turn, STAT3 is phosphorylated by the autocrine function of IL-6, and growth-related genes are upregulated [19] (Figure 1, dotted green line). Several questions remain open, for instance, whether a specific extracellular signal is necessary to induce the NRD/TACE complex, or whether ADAM10 can cleave TNF in vivo. Reports have also suggested that solTNF upregulates the transcription factor AP-2α. The TACE promoter contains an AP-2α binding sequence, to which AP-2α can bind in a TNF-dependent manner, indicating that solTNF downregulates TACE expression and function as AP-2α is enhanced [23] (Figure 1, dotted red line). The role played by AP-2α in the synthesis and function of TACE is controversial. Evidence showed that TNF induces caspase-6 activation, which in turn cleaves AP-2α, suggesting that TNF downregulates AP-2α by a caspase-6-dependent pathway [24]. It is possible that the concentrations of solTNF and tmTNF, or maybe one of the receptors, could be responsible for determining the up- or down-regulation of AP-2α and its consequent role. (3) P2 and iRHOM2 pathways. The P2 purinergic receptors have bi-functional effects on TNF release: on the one hand, P2X receptor activation attenuates TNF release, while, on the other, P2Y induces TNF release [25]. ATP induces an intracellular Ca²⁺ rise by a P2X7-dependent pathway, activating the kinase p38 and finally promoting TACE's release into exosomes. This could be a mechanism to shed membrane proteins to neighbouring cells, thus propagating inflammation [26] (Figure 1, dotted black line). iRHOM2 (rhomboid 5 homolog 2, also known as RHBDF2) is a member of the rhomboid protein family found in the ER. iRHOM2 is considered an essential regulator of the TNF/TACE crosstalk, as it helps TACE to reach the plasma membrane [27]. Data indicate that Ubiquitin (Ub)-conjugating enzyme variant 1A (Uev1A) polyubiquitinates iRHOM2, promoting TACE maturation (Figure 1, dotted purple line). However, the role of iRHOM2 is not limited to inducing TACE maturation. The phosphorylation of the iRHOM2 cytoplasmic tail by MAP kinases (p38, JNK and ERK1/2) is a crucial step to expose the TACE proteolytic site, activating its sheddase function [14,28] (Figure 1, purple box).

Tumor Necrosis Factor Receptors

Two receptors have been identified to mediate interactions with TNF: tumor necrosis factor receptor 1 (TNFR1), also called CD120a and p55 (its molecular weight is 55 kDa), and tumor necrosis factor receptor 2 (TNFR2), also called CD120b and p75 (its molecular weight is 75 kDa) [29].
TNFR1 and TNFR2 are not specific to TNF; they also interact with lymphotoxin alpha (LTα, previously known as TNFβ). LTα is a cytokine closely related to TNF, activated by stimuli similar to those activating TNF, produced mainly by lymphoid cells in a soluble form, and able to combine with LTβ to interact with a different receptor, LTβR [30]. TNFR1 and TNFR2 exist on the cellular membrane or in a soluble form following TACE activation; their cytoplasmic domains are unrelated, and their intracellular signalling pathways are independent. TNFR1 is involved in cytotoxicity, whereas TNFR2 plays a role in both cytotoxicity and proliferation [31,32]. As described in the previous section, the exact mechanism involving TACE in the shedding of TNFR1 and TNFR2 is still unclear. Tumor Necrosis Factor Receptor 1 TNFR1 contains a cytoplasmic region designated the death domain (DD), which initiates a cytotoxic signal and shows homology to the intracellular domain of the Fas antigen [33]. TNFR1 activates different signalling pathways involved in neutrophil migration, the complement pathway, and the regulation of other cytokines and chemokines, adhesion molecules and their receptors, generally promoting inflammatory responses [34]. Several molecules were described as essential for NF-κB activation through a TNFR1-dependent pathway, for example, TRADD (TNFR1-associated death domain), RIPK1 (receptor-interacting protein kinase 1, also known as RIP1), TRAF2 (TNFR-associated factor 2) and FADD (Fas-associated death domain) [35]. The TNF/tmTNFR1 complex induces activation signalling pathways leading to opposite effects such as cell survival and cell death. Once TNF binds tmTNFR1, the TRADD, RIPK1, TRAF2 and TAK1 molecules are recruited near the DD, assembling complex I. Complex I mediates the activation of MAPK and NF-κB, promoting cell survival. To activate this pathway, the phosphorylation of Jak (Janus kinase)-1 and Jak2, and of STAT (signal transducer and activator of transcription)-3 and STAT5, is required [36,37]. RIPK1 is ubiquitinated (RIPK1U) to recruit IkappaB kinase (IKK), and RIPK1U then binds NEMO (the regulatory subunit IKKγ) to finally activate NF-κB [38] (Figure 2, left). However, if TRADD, RIPK1 and TRAF2 dissociate from the DD, they can interact with FADD, assembling complex II, which activates caspase-8 and induces cell death. It has been suggested that full caspase-8 activation requires ROS generation both upstream and downstream of complex II [37,39] (Figure 2, left).
The interactions of solTNF or tmTNF with tmTNFR1 are also crucial to trigger apoptotic signals. It has recently been shown that tmTNF induces the binding of STAT1 to a region spanning amino acids 319-337 of tmTNFR1. STAT1 phosphorylation (at the serine residue in position 727) favors its binding to TRADD and FADD, promoting apoptosis but not NF-κB activation [40]. Thus, the regulatory balance between survival and death through the TNFR1 pathway is crucial to maintain homeostasis, and several authors have suggested that TNFR1-mediated signal transduction includes a checkpoint resulting in cell death when the signal activating NF-κB fails. Although it is not yet clarified how this checkpoint is controlled, experimental evidence suggests that RIPK1 is a crucial target in regulating this balance. It was reported that IKKα/IKKβ mediates direct phosphorylation of RIPK1 as the last step regulating cell death [41]. Toso, also known as FAIM3 (Fas apoptosis inhibitory molecule 3) or FcµR, is a transmembrane protein with a negative regulatory function, which promotes the ubiquitination of RIP1, inducing NF-κB activation and consequently cell survival [42], thereby affecting caspase-8 activation [43]. Another molecule that has been involved in cell death is MAPKAP kinase-2 (MK2) [44]. If the effector MK2 phosphorylates the kinase RIPK1, the binding with FADD/caspase-8 is inhibited, and thus complex-II-dependent cell death is blocked [44,45]. TNFR1-mediated cell survival may therefore depend on the ubiquitination and phosphorylation of RIPK1 (Figure 2, left). Additionally, the TNF/tmTNFR1 pathway can induce a form of cell death called necroptosis, as the assembly of FADD, RIPK1, and RIPK3 forms the complex called the necrosome [46]. Roca and collaborators [47] have shown in zebrafish and in human macrophages (infected with Mycobacterium marinum or Mycobacterium tuberculosis) that an excess of TNF triggers necrosis through TNF-RIPK1-RIPK3 interactions, increasing the production of ROS, cyclophilin D (a mitochondrial matrix protein), and BID and BAX (pro-apoptotic proteins) [48]. The same group has recently reported that TNF triggers the production of ROS, activating cyclophilin D and leading to BID and BAX activation, which results in mitochondrial Ca2+ overload through the ER ryanodine receptor and in necrosis [47]. The ubiquitin-binding protein ABIN-1 is critical for the activation of RIPK1 [49]. This protein is recruited into complex I and plays a critical role in the control of ubiquitylation and deubiquitylation. It has been proposed that ABIN-1 deficiency reduces the recruitment of the A20 molecule (a negative regulator of NF-κB), and consequently RIPK1 is ubiquitylated and activated to mediate necroptosis [49] (Figure 2, left).
Tumor Necrosis Factor Receptor 2 Although the tmTNFR2 signalling pathway has been mainly implicated in cell proliferation, activation and survival, it has also been reported to transduce apoptotic signals in specific models and to potentiate TNFR1-induced cell death [50,51]. Two molecules with the ability to interact with the TNFR2 cytoplasmic domain and to induce signalling were reported and called TNF receptor-associated factor 1 (TRAF1) and TRAF2 [52]. TNFR2 signalling was first simplified as a pathway in which TRAF2 induces c-Jun N-terminal kinase (JNK) activation, using apoptosis signal-regulating kinase 1 (ASK1) as a mediator to activate NF-κB and facilitate anti-apoptotic signals [53]. JNK plays a fundamental role in determining the outcome of the TNFR2 pathway. It has been shown that after TNF/tmTNFR2 engagement, TRAF2 and the inhibitor of apoptosis molecule cIAP1 (which mediates TRAF2 ubiquitination) are recruited to the TNFR2 cytoplasmic domain and later translocated to an ER-associated compartment, where TRAF2 ubiquitination occurs [54,55] (Figure 2, right). The ER is sensitive to alterations in homeostasis favoring the accumulation of misfolded proteins, which trigger ER stress and promote apoptotic cell death [56]. Thus, after ER stress, TRAF2 forms a complex with the ER-stress sensor IRE1 and interacts with procaspase-12, promoting its activation [57]. It has also been proposed that ER stress induces expression of TNF in an IRE1- and NF-κB-dependent manner while TRAF2 is decreased; together, these inhibit JNK activation and make cells susceptible to cell death [58]. These data support a relevant role of the ER as an additional apoptotic control point of the TNFR2 pathway. It has also been proposed that following tmTNFR2 activation, the E3 ubiquitin ligase Smurf2 forms a ternary complex with tmTNFR2 and TRAF2, inducing relocalization of TNFR2 to the insoluble membrane/cytoskeletal fraction and promoting JNK activation [59]. Mixed lineage kinase 3 (MLK3), a mitogen-activated protein kinase kinase kinase (MAP3K) required for optimal activation of JNK signalling, has been shown to associate with TRAF2, TRAF5 and TRAF6. However, only TRAF2 induces the kinase activity of MLK3, by conjugation with polyubiquitin chains, to activate JNK [60,61]. Finally, there is controversial evidence about the role of TNF in mitochondrial integrity. Some reports have shown that TNF causes mitochondrial fragmentation (fission), while other authors suggested that TNF stimulates mitochondrial biogenesis [62,63]. In non-asthmatic human airway smooth muscle (hASM) cells, fission was associated with an increased level of Drp1 (dynamin-related protein 1) and a decreased level of Mfn2 (the GTPase mitofusin 2), and TNF was involved in the Mfn2 reduction [64]. Recently, it was reported that hASM cells exposed to TNF showed fission and mitochondrial biogenesis, with increased organelle volume density, cell proliferation, reduced Ca2+ influx, and decreased O2 consumption per mitochondrion [65]. In this regard, TNFR2 activation mediates interactions between Stat3 (signal transducer and activator of transcription 3) and RelA: the Stat3/RelA complex involves two lysine residues within Stat3 that are acetylated by p300, and OPA1 (optic atrophy 1) expression is increased, suggesting enhanced mitochondrial fusion [66].
In summary, the tmTNFR2 pathway is tightly regulated and depends on different cellular organelles, which provide alternative control points for specific cellular systems when TNF is produced at inappropriate concentrations. These mechanisms can help to develop novel therapeutic targets for treating or preventing diseases in which TNF expression is dysregulated. tmTNF Signalling Pathway As previously discussed, tmTNF is a stable homotrimer that is cleaved by TACE and released as solTNF. The first publications on tmTNF suggested that tmTNF interacts primarily with TNFR2, whereas solTNF binds mainly to TNFR1 [67,68]. However, we need to consider that the tmTNF forms that showed specific interaction with TNFR2 are artificial, because they contain mutations not found in the native tmTNF molecule while being otherwise identical to solTNF. Also, the mutations used to retain tmTNF on the cell membrane can differ, resulting in different outcomes after an infection in vivo and in vitro [30,69]. At present, no data have been reported excluding the interaction of TNFR1 with tmTNF. Many studies have reported TNFR1-tmTNF interactions in vivo and in vitro and, in particular, in the context of mycobacterial infections using tmTNF mutant mice, as will be discussed in this review. tmTNF is a molecule that induces crosstalk between tmTNF-bearing cells and tmTNFR-bearing cells, signalling both as a ligand and as a receptor. This means that tmTNF not only mediates the forward signal to the target cell but also mediates reverse signalling inside the tmTNF-bearing cell [69] (Figure 3A). Evidence suggests that although both solTNF and tmTNF mediate cytotoxicity, tmTNF can exert apoptosis on solTNF-resistant cells. Using a model of TNF-resistant cells (the HL-60 cell line), it has been reported that tmTNF promotes the interaction with TRAF1 and TRAF2. TRAF1 plays a suppressor role by blocking the translocation of TRAF2 from the cytoplasm to the cell membrane. Consequently, NF-κB activation is inhibited, resulting in cell death (Figure 3B) [70]. The same group has also proposed that forward signalling can result in opposite activities, since tmTNF acting as a receptor promotes NF-κB activation, whereas acting as a ligand it inhibits NF-κB activity [71]. It has been reported that MCF-7 human tumor cells with a high expression level of tmTNF are resistant to solTNF-induced cell death and display constitutive NF-κB activation. tmTNF contains a leader sequence (LS) in the cytoplasmic segment (76 amino acid residues) through which tmTNF is anchored into the membrane. The LS appears to affect both forward and reverse signalling, and it seems that the LS directly induces the constitutive activation of NF-κB [72,73] (Figure 3C). Using a pleural cell model, we have previously shown that tmTNF, but not solTNF, controls the expression of TNFR2 on myeloid cells expressing tmTNF; TNFR1 also contributes, but in a minor way.
Besides, the inflammatory process of BCG-induced pleurisy was downregulated mainly by tmTNF and TNFR2 [74]. Recent studies have shown that tmTNF efficiently activates both TNFR1 and TNFR2, whereas solTNF interacts with and activates TNFR1 [75,76]. In several liver injury models, it has been shown that only solTNF, and not tmTNF, causes liver toxicity, while tmTNF can protect the host against mycobacterial infections [77,78]. Recently, we reported that TNFR1 is necessary to recruit myeloid cells, while TNFR2 is implicated in cell activation after BCG-induced pleural infection [78,79]. The actin cytoskeleton plays an essential role in different functions of the cell, for instance, intracellular trafficking and cellular contractility, but it has also been recognized that the dynamic structure of actin is involved in apoptosis and necrosis [80]. Actin dynamics are another molecular mechanism by which the activated signalling can differ between the two forms of TNF. Data have shown that solTNF induces actin depolymerization and morphological changes through ERK and p38 MAPK activation, inducing cell death [81]. In contrast, tmTNF does not affect the state of actin microfilaments; apparently, actin is involved in tmTNF-mediated signal transduction by uncoupling TRAF2 and cFLIP from TNFR2, consequently activating caspase-8 to induce apoptosis and inhibiting NF-κB activation (Figure 3D) [82]. Activation-induced cell death (AICD) plays a role in regulating peripheral immune tolerance by deleting overactivated or autoreactive T cells. It has been described that NF-κB is required to mediate the expression of the pro-apoptotic molecule Fas ligand (FasL or CD95L), inducing AICD [83]. It was recently reported that when tmTNF functions as a receptor, using an anti-TNF polyclonal antibody to trigger reverse signalling, tmTNF can upregulate FasL expression and, consequently, increase AICD. Moreover, tmTNF-dependent reverse signalling also significantly increases several ligands, including TNFRs and Fas (Figure 3E) [84]. TNF/TNFR1/TNFR2 Inhibitors Diverse anti-TNF therapies targeting the cytokine or its receptors are underway [85,86]. Although the approved TNF therapies present several side effects, they are efficient treatment options for controlling inflammatory diseases [87]. Both tmTNF and solTNF are biologically active, and the balance between the two forms is influenced by the cell type and its activation state. The long-term use of various TNF pathway inhibitors, such as monoclonal antibodies, to treat diseases such as rheumatoid arthritis or Crohn's disease can modify the regulatory activity of T cells and lead to an increased susceptibility to bacterial infection. Mycobacterial infections are mainly due to Mycobacterium tuberculosis (M. tuberculosis) and Mycobacterium bovis (M. bovis); in addition, there are unwanted effects such as sepsis, autoimmunity and neurodegeneration. Among the most used inhibitors are Etanercept, a TNFR2 dimeric fusion protein, followed by the monoclonal antibodies Adalimumab and Infliximab. Their therapeutic efficacy depends on the interaction of the Fc region of the anti-TNF IgG with the FcγR expressed on the cell surface. Moreover, these antibodies have been related to a specific risk of extrapulmonary and disseminated infections [88]. New therapeutic strategies are proposed to target the TNF axis with a minimum risk of M. tuberculosis reactivation.
For instance, a newly designed hypo-fucosylated form of Adalimumab was found to have better healing properties due to its high affinity for FcγRIII and its induction of CD206+ macrophages, without apparent adverse effects. However, validation in large cohorts of patients is required to verify the clinical response [89]. Sultana and Bishayi selectively neutralized or inhibited TNFR1 or TNFR2 in a model of Staphylococcus aureus-induced septic arthritis; the authors showed that the levels of pro- and anti-inflammatory cytokines were modulated via NF-κB and JNK signalling, which favored an increase of iNOS and RANKL, reduced recruitment of phagocytes at the site of inflammation, and subsequently decreased the generation of ROS and septic arthritis [90]. Additional studies suggest that the combination of conformational analysis and molecular docking studies of cyclic peptide inhibitors of human TNF and TNFR1 could enable the rational design of a new moderate inhibitor of the TNF-TNFR1 interaction. Combinations of these cyclic peptides were shown to improve severity in rat colitis models, depending on the use of the TNF-binding cyclic peptide (TBCP) or the TNFR1-binding cyclic peptide (TRBCP) [91]. Progranulin (PGRN), a secretory growth factor and endogenous TNF antagonist that binds directly to TNFR1 and TNFR2, induces regulatory T cells, increasing IL-10 production, activates the ERK2 pathway, and suppresses the stimulation of IL-1β and TLR4 by binding to TNFR1 [92]. Dominant-negative TNF molecules selectively blocking solTNF but not tmTNF were designed and reported to prevent inflammatory processes mainly mediated by solTNF [93]. These selective inhibitors of solTNF were shown to protect mice from acute liver injury while preserving the activity of tmTNF required for host defence mechanisms against mycobacterial infections. In contrast, non-selective inhibitors of both solTNF and tmTNF, such as Etanercept, suppressed immunity to mycobacterial infections [94,95]. Spohn et al. [96] reported a virus-like particle-based vaccine selectively targeting solTNF by generating anti-TNF antibodies, which protected mice from arthritis without affecting reactivation of latent tuberculosis [96]. In this regard, Zhang et al. [97] designed a TNF epitope-scaffold immunogen, the DTNF7 vaccine, using the diphtheria toxin transmembrane domain (DTT) as a scaffold. The grafted TNF epitope is wholly exposed on the surface and presented in a native conformation, while the rigid helical structure of DTT is minimally disturbed, so the highly stable immunogen induces a sustained antibody response. This TNF epitope-scaffold immunogen induced sustained antibody responses in a mouse model of collagen-induced arthritis [98]. These authors proposed that selective modulation of TNFR1 and TNFR2 may offer advantages over general TNF inhibition because it preserves the non-targeted receptor's function and reduces side effects [97]. These results could represent a potential application to reduce the risk of latent tuberculosis (TB) reactivation. A summary of TNFR1 and TNFR2 modulators is presented in Table 1. Other efforts have focused on designing and developing small selective inhibitors of the TNF-converting enzyme (TACE). Zinc-binding groups (ZBG) have been considered to play an essential role in determining the efficacy, selectivity and toxicity of TACE inhibitors.
Some of the TACE inhibitors already developed have been used in topical treatments for dermatological conditions such as psoriasis and acne, limiting their systemic toxicity. Research is currently underway to design monoclonal antibody inhibitors of TACE [99]. The use of new biologicals is currently limited, mainly due to the high cost of clinical studies and the potential risk of side effects in humans. Animal models for preclinical studies have been developed, for example, mice expressing human TNF molecules, to overcome the limitation of analyzing a human TNF inhibitor in the mouse [100]. However, efforts must be made to establish the reliability of novel drugs and study models, to characterize signalling pathways in the context of mycobacterial infection, and to address side effects such as reactivation of latent infection. Table 1 (fragment): GSK1995057, an antibody [110]; Atrosimab, a monovalent form of Atrosab [111]; Zafirlukast and DS42, small molecules [112]; TNFR2 antagonist antibodies [113,114]. Contribution of tmTNF in the Control of Mycobacterial Infections Tuberculosis (TB) is an infectious disease and the leading cause of death worldwide from a single infectious agent; the World Health Organization has estimated that in 2019, 10 million people worldwide fell ill with TB (range, 8.9-11.0 million) and 1.2 million died [115]. The causative agent of TB is the bacillus M. tuberculosis; the current vaccine used against TB is the Bacillus Calmette-Guérin (BCG), which is derived from another mycobacterium, M. bovis. The BCG vaccine provides adequate protection against childhood TB, but the level of protection against adult pulmonary TB is variable [116]. TNF plays a critical role in host defense mechanisms against mycobacterial infections, as shown by many research groups using different models of TNF- and TNFR-deficient mice [30]. In vitro, infected macrophages produce TNF, and the amount is related to bacterial virulence and infection dose. Even with very low doses of BCG, macrophages can produce TNF [117]. Experimental data have shown that the absence of TNF or TNF receptors correlates with an exacerbated inflammatory process and impaired bacterial clearance that culminates in disseminated mycobacterial infection and the host's death [75]. It has recently been reported that M. tuberculosis DNA (MtbDNA) is recognized by murine macrophages, which leads to autophagy induction, TLR-9 expression, and considerable TNF production. Interestingly, only M1 macrophages were fully responsive to MtbDNA [118]. Several reports suggest a modulation of macrophages during M. tuberculosis infection in which TNF, TLRs and autophagy are involved [119,120]. Most TNF activities have been attributed to the solTNF form. At present, the contribution of tmTNF to protective immunity against mycobacterial infections has also been analyzed using mutant mice that express only tmTNF and not solTNF. However, as previously discussed, these tmTNF variants are modified molecules retained at the cell membrane, representing a model system rather than the native tmTNF molecule. In 2002, Olleros et al. [121] reported that tmTNF might act as a receptor upon binding of soluble or membrane TNFRs to develop efficient bactericidal mechanisms. Transgenic mice expressing only tmTNF, but not solTNF and LTα, were able to activate an efficient immune response against BCG and acute M. tuberculosis infection.
tmTNF was sufficient to sustain cellular activation and to reduce the bacterial BCG load by inducing granuloma formation and IFN-γ expression, although the amounts were lower than those induced when solTNF was expressed [121]. Using these mutant mice, tmTNF expression was also associated with efficient granuloma formation, activation of iNOS, and the induction of local and systemic Th1-type cytokines such as IFN-γ and chemokines such as MCP-1 (Figure 4A). However, mice expressing tmTNF but not solTNF and LTα survived BCG and acute M. tuberculosis infection but not chronic M. tuberculosis infection, as they developed an exacerbated inflammation [122]. In a different mouse model of tmTNF knock-in (KI) mice, Saunders et al. [123] showed that mice expressing stable tmTNF (membrane-bound TNF without the TACE cleavage site), but not solTNF, were able to contain bacterial growth for over 16 weeks and developed an antigen-specific T cell response with compact granulomas, as in wild-type (WT) mice. This work reported that during the acute phase of infection (first 12 weeks), tmTNF mice responded by producing IFN-γ mRNA and chemokines such as CXCL10, CCL5 and CCL7, contributing to T cell migration and granuloma formation (Figure 4B, top). However, they succumbed to M. tuberculosis infection around 170 days post-infection, contrary to WT mice, which survived to 300 days, confirming that tmTNF is sufficient to control acute, but not chronic, infection. Using the same tmTNF KI mice in a mouse model of M. tuberculosis reactivation, it has been reported that tmTNF is not sufficient to support efficient immunity, although it confers protection against acute M. tuberculosis infection. Long-term protection requires solTNF in order to decrease the inflammatory response and to contain M. tuberculosis reactivation [123,124]. Moreover, the expression of tmTNF only in T cells (tmTNF-T cells) was shown to be sufficient to confer protection against M. tuberculosis infection, but was not associated with a reduction in bacterial load [123] (Figure 4B, bottom). Several questions about tmTNF signalling are still open because experimental data can be influenced by diverse factors, such as the nature of the mutations generated in the tmTNF molecule and their impact on the interaction with TNFRs, as well as regulatory mechanisms involving TNFRs and their soluble forms. The impact of the different mutations in the tmTNF molecule has been analyzed at the level of host defense mechanisms against mycobacteria. Indeed, two mouse models of tmTNF KI mice were compared by infection with a high dose of BCG. tmTNF KI mice with the deletion tmTNF∆1-9,K11E were able to establish an immune response similar to WT; in contrast, mice with the deletion tmTNF∆1-12 were highly sensitive to the infection [69].
The authors showed that the difference between the two tmTNF KI strains lies at the level of Th1-type immune responses and iNOS activation, which tmTNF∆1-9,K11E mice develop but tmTNF∆1-12 KI mice do not. Furthermore, the frequency of CD11b+ cells expressing TNFR2 was lower in the highly sensitive mice; the authors proposed that the interaction of tmTNF∆1-12 with TNFR2 could be deficient, in contrast to the resistant mice [69]. This study also clarified that cellular immune activation can be attributed to reverse signalling of tmTNF, characterized by NF-κB activation, NO production and the release of IL-6 and RANTES, suggesting that precise regulation of tmTNF activity can be orchestrated by binding to tmTNFR2 or solTNFR2, and that an imbalance of this complex regulatory system can modify the outcome of the disease [69] (Figure 4C). Using a model of pleural tuberculosis, this group has reported that tmTNF, but not solTNF, regulates the expression of tmTNFR2 on myeloid cells [74]. An interesting report by Keeton and colleagues has shown that the soluble and transmembrane forms of the TNFRs mediate different activities of TNF. Indeed, solTNFR2 reduces bioactive TNF concentrations through downmodulation of dendritic cell activation, affecting several TNF-dependent functions such as cytokine synthesis, whereas tmTNFR2 promotes immune protection [125]. Myeloid-derived suppressor cells (MDSC) have been described as natural suppressor cells inhibiting the proliferative response of T-helper lymphocytes. A study has reported a new mechanism to explain the role played by tmTNF in the control of BCG infection. BCG infection induces MDSC accumulation, and tmTNF expression on MDSC is crucial to activate their suppressive function. Chavez-Galan et al. [126] suggested that tmTNF on MDSC interacts specifically with TNFR2 on CD4 T cells (Figure 4D). The suppressive activity of MDSC attenuates the excessive inflammation associated with mycobacterial infection [125]. A limitation of this study is that the intracellular pathways are not yet clarified. Using a tumor model, it has been reported that tmTNF activates MDSC, upregulating arginase-1 and iNOS transcription to promote NO, IL-10, and TGF-β secretion and enhancing the inhibition of lymphocyte proliferation [127]. It is possible that MDSC in a BCG infection model also upregulate the same molecules, but the exact pathway is still unclear. TNF Apoptosis Inhibition in Macrophage-Mycobacterial Infection The apoptotic process in M. tuberculosis-infected macrophages can be initiated by the TNF-TNFR1 interaction, with TRADD (TNFR-associated death domain) and FADD (Fas-associated death domain) association and subsequent aggregation of death effector domains (DEDs), procaspase-8 activation and assembly of the DISC (death-inducing signalling complex). Finally, effector caspases are activated. TNF induces apoptosis through receptor-associated molecules that impact caspase activation, and resistance to cell death may be influenced by NF-κB [128]. The virulence of mycobacteria is strongly associated with cell death evasion during infection. Virulent strains like M. tuberculosis induce miR-30A overexpression in infected macrophages, inhibiting autophagy and negatively impacting immune control [129]. A particular inverse correlation has been found between mycobacterial virulence and apoptosis induction [130]. In this regard, it has recently been reported that TNFR2 increases microbicidal activity against M.
tuberculosis independently of IFN-γ and nitric oxide, and this activity displays an inverse correlation with macrophage apoptosis; this apoptosis is not observed under BCG infection, suggesting that the regulation of apoptosis and mycobacterial replication by TNFR2 is a virulence-dependent pathway [131]. However, beyond virulence, several factors are involved in the induction or inhibition of apoptosis during mycobacterial infection; for instance, strain phenotype, stage of infection, and cell condition may interfere. Other mechanisms have been observed, such as secretion of soluble TNFR2, which blocks TNF activities including apoptosis, and reduced Fas receptor expression affecting Fas ligand-mediated cell death [132,133]. In an infection model with M. tuberculosis H37Rv in U937 and THP-1 cell lines and in human monocyte-derived macrophages, apoptosis inhibition involved the extrinsic apoptosis pathway but did not interfere with the mitochondrial apoptosis pathway [134]. Two proteins secreted by M. tuberculosis into the cytoplasm, Rv3654c and Rv3655c, were identified as responsible for apoptosis suppression. Rv3654c cleaves PSF (polypyrimidine tract-binding protein-associated splicing factor), which decreases the expression of caspase-8 [134]. In turn, Rv3655c is associated with high expression of ALO17, a protein commonly associated with ALK (anaplastic lymphoma kinase); these associated proteins have been involved in anti-apoptotic events, in particular in the inhibition of PI3K/Akt and caspases [135]. However, the role that the intrinsic pathway may play during M. tuberculosis infection is not clear. It has been suggested that the mitochondrial apoptosis pathway may be a strategy to induce necrosis in infected macrophages during a well-defined stage of the infection [134,136]. A different mechanism has been proposed by Miller et al.: NuoG, a subunit of NDH-1 (type I NADH dehydrogenase), is involved in the neutralization of NOX2-derived reactive oxygen species (ROS), promoting inhibition of TNF-induced apoptosis [137]. If ROS increase in the phagosome, apoptosis in the macrophage can be activated. Indeed, several links between apoptosis and the TNF-activated pathway have been described affecting caspase-8 activation through the kinases ASK1, p38 and c-Abl, promoting FLIPS degradation by the proteasome and finally activating procaspase-8 [138]. Mycobacterial apoptogenic moieties have been reported, particularly PstS-1, a 38-kDa lipoprotein of M. smegmatis, involving TNF and FasL activation, which upregulates TNFR1, TNFR2 and Fas. In this process, TLR2 was involved, as well as the activation of caspase-8, caspase-9 and caspase-3 [139]. In summary, a variety of apoptotic activation pathways may depend on TNF; however, particular conditions of mycobacterial infection may lead either to the inhibition or to the efficient execution of programmed cell death in the infected cell. Conclusions TNF is a key regulatory cytokine that plays an important role in the innate and adaptive immune responses during mycobacterial infections. However, several mechanisms can influence the course of the infection. The first interactions of TNF and TNF receptors after mycobacterial phagocytosis may play a crucial role within a specific environment. It is also of interest to consider the balance between the two TNF forms, tmTNF and solTNF, and the cells responding to these TNF forms, which can influence the outcome of mycobacterial infections. Recent evidence has pointed out the relevance of tmTNF-mediated reverse signalling during mycobacterial infections.
It is of interest that this pathway not only favors pro-inflammatory responses, as discussed in this review, but that tmTNF also attenuates Th1 cell-mediated inflammatory responses and mediates cell recruitment. These are new insights elucidating the tmTNF/TNFR axis that can be considered as a target to control mycobacterial infections, mainly focusing on TB, which remains a major public health problem worldwide. Moreover, the selective inhibition of solTNF is another important target because it improves survival in acute infection; nevertheless, the diverse signalling mechanisms of both forms of TNF and of the TNFR1 and TNFR2 receptors remain to be elucidated. More research on the biology of TNF is required to design improved therapies that alleviate inflammatory diseases while maintaining protection against mycobacteria. Furthermore, this may lead to personalized treatments that promise to improve the clinical outcome of patients suffering from mycobacterial infections. Conflicts of Interest: The authors declare no conflict of interest.
Yoga Prevents Gray Matter Atrophy in Women at Risk for Alzheimer's Disease: A Randomized Controlled Trial Background: Female sex, subjective cognitive decline (SCD), and cardiovascular risk factors (CVRFs) are known risk factors for developing Alzheimer's disease (AD). We previously demonstrated that yoga improved depression, resilience, memory and executive functions, increased hippocampal choline concentrations, and modulated brain connectivity in older adults with mild cognitive impairment. Objective: In this study (NCT03503669), we investigated brain gray matter volume (GMV) changes in older women with SCD and CVRFs following three months of yoga compared to memory enhancement training (MET). Methods: Eleven women (mean age = 61.45, SD = 6.58) with CVRFs and SCD completed twelve weeks of Kundalini Yoga and Kirtan Kriya (KY + KK), while eleven women (mean age = 64.55, SD = 6.41) underwent MET. Anxiety, resilience, stress, and depression were assessed at baseline and 12 weeks, as were T1-weighted MRI scans (Siemens 3T Prisma scanner). We used Freesurfer 6.0 and tested group differences in GMV change, applying Monte-Carlo simulations with alpha = 0.05. Region-of-interest analyses were performed for the hippocampus and amygdala. Results: Compared to KY + KK, MET showed reductions in GMV in left prefrontal, pre- and post-central, supramarginal, superior temporal and pericalcarine cortices, right paracentral, postcentral, superior and inferior parietal cortices, the banks of the superior temporal sulcus, and the pars opercularis. Right hippocampal volume increased after yoga but did not survive corrections. Conclusion: Yoga training may offer neuroprotective effects compared to MET in preventing neurodegenerative changes and cognitive decline, even over short time intervals. Future analyses will address changes in functional connectivity in both groups. INTRODUCTION With the population aging rapidly, the global prevalence of dementia is expected to double every twenty years, from an estimated 35.6 million cases in 2010 to 65.7 million in 2030 and 115.4 million in 2050 [1]. Alzheimer's disease (AD) is an irreversible neurodegenerative disease, and therefore early detection and preventive intervention are critical to delaying the onset of symptoms [2]. While the AD risk gene APOE E4 is hardwired and unchangeable, focusing on modifiable risk factors promises to delay AD onset [3,4]. Female sex, cardiovascular risk factors (CVRFs), and subjective cognitive decline (SCD) are known risk factors for developing AD [5][6][7][8][9][10]. CVRFs include older age, high blood pressure or hypertension, diabetes, high cholesterol, obesity, and a family history of cardiovascular disease [5][6][7][8][9][10]. While some of these CVRFs are modifiable (e.g., body weight, cholesterol, high blood pressure), others are not (sex, age, family history). Early prevention techniques are most urgently needed in high-risk groups. It has been argued that while objectively measured cognitive impairment using standardized neuropsychological assessments is not detectable in preclinical stages of AD, subjective cognitive decline and neuroimaging-based biomarkers are already detectable [8]. Brain gray matter volume (GMV) atrophy, particularly in the hippocampus, which supports memory function, can be observed decades before the onset of cognitive problems and can serve as an early biomarker for AD [11].
Yoga has documented beneficial effects on cardiovascular functioning and is a promising intervention for stress reduction; that is, yoga is well suited to improving some of the modifiable AD risk factors [12]. Yoga has recently been identified as a safe practice with positive effects on cognitive functions in the healthy elderly [13], in older adults with mild cognitive impairment (MCI), and in those with early stages of dementia [14][15][16]. Yoga was found to improve both mental and physical health in older adults with dementia in long-term care facilities, including blood pressure, breathing rate, cardiorespiratory fitness, body flexibility, balance, joint movement, muscle strength, and endurance [17]. Reports in the general population also show that the motivations to practice yoga include the achievement of better wellbeing, disease prevention, and the improvement of fitness, energy levels, physical and mental health, immune functioning, back pain, arthritis, anxiety, and depression [18,19]. While yoga commonly involves postures (asanas), focus on breath (pranayama), postural alignment, and movement, brief meditative practices can be helpful as well. For instance, in SCD, a yogic meditation named Kirtan Kriya (KK) improved perceived stress, psychological well-being, quality of life, and mood compared to a music-listening control [20]. Yoga can be used to improve stress, memory, brain health, and subjective wellbeing in high-risk groups, as well as in the general population. Neuroimaging studies have demonstrated greater overall GMV in experienced yoga practitioners compared to non-practitioners, with more years of yoga experience being associated with greater GMV in the left insula, frontal operculum, and orbitofrontal cortices [21]. In addition, the same study found that hours of weekly practice were associated with greater GMV at the borders of the primary somatosensory and superior parietal cortices, the precuneus and posterior cingulate junction, and in the hippocampus and primary visual cortex. The authors therefore suggested that yoga may have neuroprotective effects. Another study found that left hippocampal volume was greater in experienced yoga practitioners compared to age- and sex-matched yoga-naïve controls [22]. Greater GMV in frontal, limbic (including the hippocampus), temporal, occipital and cerebellar regions was also found in a group of yogic meditation practitioners compared to non-practitioner controls and was associated with cognitive performance [23]. Both cognitive performance and the number of years of past yoga practice were associated with greater GMV in various brain regions in this study as well. In experienced yoga meditators, global GMV was greater compared to non-meditators, in addition to regional GMV in the right ventromedial orbitofrontal and ventrolateral prefrontal cortices, inferior temporal and parietal cortices, and bilateral insula [24,25]. Notably, non-meditators did not display greater GMV than meditators in any brain region. Four weeks of Sahaja yoga meditation training led to increases in GMV in a small cluster in the right inferior frontal gyrus, as well as improved well-being and reduced fatigue and dissatisfaction compared to a waitlist control group [26]. A small uncontrolled six-month yoga trial in a group of seven older healthy volunteers reported increases in hippocampal volume, but not in a control region in the occipital cortex [27].
Our own studies have previously demonstrated beneficial effects of Kundalini Yoga (KY) paired with a KK homework practice (KY + KK) on cognition, resilience, neurochemistry, and neuroplasticity in a randomized controlled trial (RCT) in older adults with MCI [28][29][30]. Specifically, KY + KK improved depression, resilience, and executive functions, and prevented decline in hippocampal choline concentrations [28,30]. Since then, multiple meta-analyses have confirmed that mind-body interventions can effectively enhance various cognitive domains and resilience in healthy older adults and those with MCI [31][32][33]. It is important to note that existing studies reporting GMV effects of yoga are based on small samples and often on specific and homogeneous populations. More systematic and larger-scale studies are therefore critical to further address potential neuroplastic effects over short periods of yoga training. KY is a form of yoga that combines movement and meditation, with a greater focus on breathing-intensive exercises and mental visualizations than other forms of yoga. Meditation practices can be silent or involve repetitive Sanskrit chants intended to bring peace to the mind. These meditative practices can additionally include mudras, continuous or changing positions of the hands and fingers. KY is thought to alleviate energetic blockages that may correspond to areas of chronic muscle tension, tightness or weakness, or other physical discomforts. The greater emphasis on breathing and mental engagement, combined with fewer and gentler movements, makes this an ideal form of yoga to practice with older people, who often have mobility problems. While the evidence on the effect of yoga as a whole on cardiovascular health is abundant, the type of yoga, the duration, and the control interventions vary greatly, according to a recent meta-analysis [12]. It was reported that half the included research articles did not state what form of yoga was used, and only one study reported using KY. It is therefore critical to delineate the format and consistency of our program, including daily homework exercises, and to study its effects systematically. It is further notable that existing KY + KK studies are underpowered and few in number. It is therefore critical to conduct larger-scale systematic clinical research in both healthy and at-risk populations. To summarize, yoga practices offer a multitude of documented positive effects on mental and physical health, as well as neuroplastic effects in the brain, in both healthy and cognitively impaired older adults. It has not been tested whether yoga can improve memory and biomarkers of brain aging in high-risk groups with SCD. We therefore investigated GMV, stress, and memory changes in a group of older women with CVRFs and SCD undergoing cognitive assessments pre- and post-trial. Specifically, we tested group differences in GMV changes. As an additional exploratory analysis, we tested whether there were also group differences in changes in anxiety, depression, resilience, and memory. Screening Between May 2018 and February 2021, 100 women were screened and randomized for the current subpart of the parent study (NCT03503669), as intended. Only 26 women (N = 13 in each group; mean age = 62.8 years, SD = 6.43) were eligible or willing to undergo an MRI scan at baseline, and 22 also completed a second scan after completion of the trial (N = 11 per group, see Fig. 1). The screening was performed by a trained staff member to assess physical and cognitive problems.
Eligibility criteria were 1) SCD by report, including subjective complaints of decline in cognitive and memory function from the past level of functioning compared to the previous year; 2) the presence of one or more CVRFs, including a ≥7.5th percentile on the ASCVD risk calculator using the Cerebrovascular Risk Factor Prediction Chart and hematologic testing; 2.1) a history of myocardial infarction no less than 6 months prior; 2.2) a previous diagnosis of diabetes; 2.3) current pharmacological treatment for blood pressure (>140/90); 2.4) current pharmacological treatment for hyperlipidemia (LDL > 160); 3) sufficient English proficiency to successfully comprehend the memory training; and 4) sufficient mental capacity to provide informed consent. Exclusion criteria for the main trial included 1) a history of psychiatric conditions including psychosis, bipolar disorder, drug or alcohol dependence, or a neurological disorder; 2) surgery within the past three months or planned surgery within the next year, as well as unstable medical conditions; 3) disabilities, such as severe visual or hearing impairment, that could prevent participation in the intervention; 4) insufficient English proficiency to comprehend the intervention instructions, materials and discussions; 5) a diagnosis of dementia (MMSE ≤ 23 or CDR ≥ 0.5, see below); 6) current participation in cognitive training in a therapeutic setting; 7) current treatment with psychoactive medication; 8) prior experience with KY or KK; and 9) myocardial infarction within the past 6 months. Additionally, MRI exclusion criteria involved metallic implants, permanent makeup, tattoos in the head and neck region, and claustrophobia. All participants provided written informed consent for participation in the trial as approved by the UCLA Institutional Review Board (IRB). This study was conducted in accordance with the Declaration of Helsinki of 1975. Cognitive assessment Participants completed the Clinical Dementia Rating Scale (CDR) [34], the Mini-Mental State Examination (MMSE) [35], and the Memory Functioning Questionnaire (MFQ) [36]. The former two were used exclusively to screen out potential participants with AD. Two factor scores from the latter 64-item questionnaire were used in the analysis: frequency of forgetting (MFQ1) and seriousness of forgetting (MFQ2). All tests were administered both at baseline and at the 12-week follow-up. SCD was defined as the subjective experience of declining memory function despite a normal range of memory function on neuropsychological measures (for a more detailed discussion, see [37]). Note that the primary outcome variable of this study was the Hopkins Verbal Learning Test total recall score. However, the parent study tested this memory score at baseline and 6 months, and therefore no score was available at the 3-month follow-up when we acquired MRI scans. Those results will be reported in a separate report using the larger parent sample. Clinical assessment Anxiety, depression, stress, and resilience were selected as the main clinical outcomes for the current study and were measured at baseline and 12-week follow-up using the Beck Depression Inventory (BDI) [38], the Hamilton Anxiety Scale (HAMA) [39], the Connor-Davidson Resilience Scale (CDRISC) [40], and the Perceived Stress Scale (PSS) [41]. Side effects and adverse events were monitored using the UKU Side Effect Rating Scale [42].
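To make the screening logic described above concrete, the following is a minimal, illustrative sketch, not the study's actual screening procedure or code: all field names and the example candidate are hypothetical, and only the numeric cut-offs are taken from the criteria listed above (ASCVD ≥7.5th percentile, BP > 140/90 under treatment, LDL > 160 under treatment, MMSE ≤ 23, CDR ≥ 0.5, myocardial infarction timing).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    has_scd: bool                  # subjective cognitive decline by report
    ascvd_percentile: float        # ASCVD risk calculator percentile
    treated_hypertension: bool     # pharmacological treatment for BP > 140/90
    treated_hyperlipidemia: bool   # pharmacological treatment for LDL > 160
    diabetes: bool                 # previous diagnosis of diabetes
    mi_months_ago: Optional[float] # months since myocardial infarction, if any
    mmse: int                      # Mini-Mental State Examination score
    cdr: float                     # Clinical Dementia Rating

def has_cvrf(c: Candidate) -> bool:
    """At least one cardiovascular risk factor per the inclusion criteria."""
    prior_mi = c.mi_months_ago is not None and c.mi_months_ago >= 6
    return (c.ascvd_percentile >= 7.5 or prior_mi or c.diabetes
            or c.treated_hypertension or c.treated_hyperlipidemia)

def eligible(c: Candidate) -> bool:
    # Dementia screen-out: MMSE <= 23 or CDR >= 0.5 excludes a candidate.
    if c.mmse <= 23 or c.cdr >= 0.5:
        return False
    # Myocardial infarction within the past 6 months is exclusionary.
    if c.mi_months_ago is not None and c.mi_months_ago < 6:
        return False
    return c.has_scd and has_cvrf(c)

# A hypothetical candidate: SCD plus treated hypertension, MMSE 29, CDR 0.
print(eligible(Candidate(True, 9.0, True, False, False, None, 29, 0.0)))  # True

This sketch covers only the quantifiable criteria; the remaining inclusion and exclusion criteria (English proficiency, psychiatric history, surgery, MRI contraindications, and so on) were assessed clinically and are not modeled here.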
Yoga intervention The KY + KK intervention consisted of weekly, 60-min in-person lessons with a certified KY instructor with 6-10 participants per class. Each class followed the same structure: 1) tuning in (5 min); 2) warm up (15 min); 3) breathing techniques "Pranayama" (15 min); 4) KK (12 min); 5) final resting pose "Savasana" (10 min) and closing (3 min). In addition, each participant received a CD containing a 12-min KK recording with gentle background music and guidance for the exercise sequence. Participants performed this exercise at home every day. They were instructed to chant along with their eyes closed in a seated position, the feet flat on the floor (i.e., relaxed with a straight spine), to visualize a beam of white light entering the center of the top of the head and exiting the middle of the forehead, which is spiritually considered the third eye. While chanting, the thumb of each hand would touch the other fingers sequentially ("mudras") along with the words "Saa" (thumb touches second finger), "Taa" (middle finger), "Naa" (ring finger), and "Maa" (fifth finger). "Saa Taa Naa Maa" translates to "Birth, Life, Death, and Rebirth." The first round is chanted out loud, the next round whispered, the third is thought silently, the fourth is also whispered, and the fifth round is chanted out loud again. This sequence is repeated for 11 min and closes with a last minute of energetic integration and meditation. This technique is thought to engage different senses simultaneously (visualization, vocalization, motor, and sensory stimulation). Furthermore, the chanting and breathing pattern modulate respiratory muscles, lung volume, cardiovascular and autonomic nervous system functions (for a narrative review on singing, see [43]). Memory enhancement training (MET) MET involved 12 weekly in-person group classes presented by a qualified memory training instructor. The classes aimed to teach memory strategies, while participants completed weekly homework assignments and handed them in to ascertain participant compliance. MET was developed by researchers at the UCLA Longevity Center. This MET program involves a scripted curriculum for the trainer and a companion workbook for each participant. The detailed standard protocol for MET was derived from evidence-based techniques that use verbal and visual association, as well as practical strategies for memory learning [44,45]. MET is performed in small group sessions of 6-10 people and includes 1) education about memory; 2) introduction to memory strategies (described below); 3) instruction of the use of specific memory strategies; 4) home practice along with logs to track activity; 5) the discussion of noncognitive factors, such as self-confidence, anxiety, and negative expectations. Each weekly session has the same structure; trainers 1) document the number of participants per session, engagement in alternative treatments, and collect homework completion logs; 2) review the previous homework exercises to reinforce learned techniques; 3) teach new techniques, review, and conduct exercises in the group session; and 4) assign new homework for the following week. Participants were directed to spend approximately 20 min daily on homework and document their activity in their logs. Each group session was devoted to learning and practicing memory techniques, and 15 min were reserved for reviewing the completed homework. 
Specific techniques taught include the following: verbal associative techniques (such as the use of stories) to remember lists; organizational strategies (categorizing items on a grocery list); visual associative strategies for learning faces and names (adapted from [46]); learning to implement memory habits to recall where the person placed an item and what recent activities they performed (e.g., locking doors, turning off appliances); and how to remember future tasks (i.e., appointments). Adherence and attendance Staff members tracked the attendance of participants at their weekly in-person training classes. Each participant was allowed a maximum of two missed classes. Participants self-reported whether they had completed their KK homework or their memory training tasks. Completed homework sheets had to be handed in to a staff member each time the participants came to the lab for their classes or testing. Additionally, participants were asked not to participate in any other mind-body practices during the trial period, such as Tai Chi, Qi Gong, or yoga. This adherence was also monitored at each class session. Neuroimaging protocol A high-resolution structural T1-weighted image was collected for each participant using a 3T Siemens Prisma scanner (Siemens, Erlangen, Germany) with a 32-channel head coil at the UCLA Ahmanson & Lovelace Brain Mapping Center (ALBMC). This was part of a 90-min protocol that also included a T2 scan, fMRI scans for resting state and two memory tasks, diffusion-weighted imaging, single-voxel spectroscopy, and a high-resolution scan of the hippocampus. Due to reasons of timing, comfort, and technical difficulties, not all participants completed all scans. However, every participant who underwent MRI scanning completed the T1 scan. Parameters were as follows: a multi-echo MPRAGE scan with 0.8 mm isotropic voxel dimensions, 208 slices, TR = 2,500 ms, TE = 1.81, 3.6, 5.39 and 7.18 ms, TI = 1,000 ms, FOV = 256 mm, matrix size = 256 × 256, and flip angle = 8 degrees. We performed automatic cortical reconstruction using Freesurfer version 6.0 (https://surfer.nmr.mgh.harvard.edu/). Preprocessing included the correction of magnetic field inhomogeneities, the extraction of brain volume from other tissue types, the segmentation of subcortical gray matter, and cortical surface parcellation according to the Desikan-Killiany atlas [47]. Subsequently, subject-specific templates were created using the automated longitudinal reconstruction pipeline. Baseline and post-treatment images were then registered to each participant's average template [48]. The resulting scans were carefully inspected by the same researcher for tissue misclassifications and misalignments between the baseline and post-treatment scans. Corrections were performed manually. Cortical maps were smoothed using a Gaussian kernel of 10 mm full-width at half-maximum. Subcortical volumes of the hippocampus and amygdala in each hemisphere were extracted for further analysis using statistical software (see Statistics). Statistics Baseline differences, as well as between-group differences in change in clinical scores and subcortical regions of interest (ROIs) (left and right amygdala and hippocampus), were tested using the Kruskal-Wallis test. Signed rank tests were used to examine within-group changes. The significance level for two-tailed testing was set to an alpha of 5%. Results for subcortical ROIs were corrected for multiple comparisons using the Benjamini-Hochberg procedure (with a false discovery rate of 10%).
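As a minimal illustrative sketch, and not the study's actual analysis code, the following shows the non-parametric tests and the Benjamini-Hochberg correction described above, together with the symmetrized percent change (SPC) measure used in the whole-brain analysis described in the next paragraph; all volume values and p-values below are hypothetical.

import numpy as np
from scipy.stats import kruskal, wilcoxon

def symmetrized_percent_change(v1, v2, years):
    """SPC: annualized rate of change expressed as a percentage of the
    average volume across the two time points."""
    rate = (v2 - v1) / years        # annualized rate of change
    avg = (v1 + v2) / 2.0           # average volume across time points
    return 100.0 * rate / avg

def benjamini_hochberg(pvals, fdr=0.10):
    """Benjamini-Hochberg step-up procedure; returns a boolean mask of
    hypotheses rejected (declared significant) at the given FDR."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresholds = fdr * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest rank meeting the criterion
        reject[order[:k + 1]] = True
    return reject

# Hypothetical 12-week hippocampal volume changes (mm^3), n = 11 per group.
rng = np.random.default_rng(0)
yoga_change = rng.normal(15.0, 30.0, 11)
met_change = rng.normal(-10.0, 30.0, 11)

# Between-group difference in change (Kruskal-Wallis test).
h_stat, p_between = kruskal(yoga_change, met_change)

# Within-group change versus zero (Wilcoxon signed rank test).
s_stat, p_within_yoga = wilcoxon(yoga_change)

# FDR correction across the four subcortical ROIs (hypothetical p-values).
roi_pvals = [0.05, 0.30, 0.60, 0.02]
print(benjamini_hochberg(roi_pvals, fdr=0.10))

# Example SPC for a region growing from 4000 to 4040 mm^3 over ~12 weeks.
print(symmetrized_percent_change(4000.0, 4040.0, 0.23))

Non-parametric tests are used here because, with n = 11 per group, normality cannot be assumed; the SPC helper mirrors the definition given in the next paragraph, although in practice the whole-brain SPC maps were computed vertex-wise within Freesurfer rather than by hand.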
Q-Q plots and Levene's test were used to assess normality and heteroscedasticity. Because of the small sample size (n = 11 in each of the two groups), non-parametric tests were used for all analyses. We tested group differences in whole-brain symmetrized percent change (SPC) in GMV, i.e., 100 × (annualized rate of change between time point 1 and time point 2) / average GMV, using the longitudinal volume-based two-stage voxel-wise general linear model in qdec (https://surfer.nmr.mgh.harvard.edu). SPC does not rely on the order of time points entered into the model and takes into account between-subject variation in inter-time point intervals [49]. Age was used as a covariate in the model. The voxel threshold and Monte-Carlo corrected cluster thresholds were set to p < 0.05. Additionally, we extracted GMV change values for each participant from the clusters and performed a post-hoc signed rank test (S-statistic) to test whether the change within each group was significantly different from zero. Age served as a covariate in this analysis as well. While the whole-brain model determined whether and where there were group differences in the brain changes, the signed rank test revealed which group contributed to the difference.

Clinical effects

The CONSORT diagram for the study is shown in Fig. 1. Of 359 potential participants screened, 251 were excluded because they did not meet the inclusion criteria or declined to participate. Out of the remaining 108 participants who were screened over the phone, 79 were randomized to yoga or MET using a computer-generated assignment scheme, which assigns participants in a 1:1 ratio to each group. Twenty-six participants completed KY, while 37 completed MET (Table 2). Anxiety and depression improved significantly within the yoga group but did not change in the MET group (Table 2). No side effects were reported.

GMV effects

Various large clusters across both hemispheres demonstrated a greater decrease in GMV in the MET compared to the yoga group (Fig. 2). The clusters were located in the left middle to superior temporal lobe, supramarginal extending down to the superior temporal cortex, the precentral cortex, medial superior frontal and lateral occipital cortices, as well as the right lateral postcentral cortex extending into the supramarginal cortex, superior parietal and precuneus, precentral stretching into pars opercularis, inferior parietal, superior temporal including the banks of the superior temporal sulcus, and the paracentral cortex (see cluster details in Table 3, which lists cluster regions, sizes, and coordinates across both hemispheres). The within-group signed rank tests of change demonstrated that MET had significant GMV atrophy in all clusters, while the yoga group showed no significant changes in all but two clusters (for details see Table 4, Supplementary Table 2). Specifically, in the left precentral and left lateral occipital cortices, the yoga group showed significant increases in GMV. Further, the yoga group showed a significant increase in the right hippocampus (see Table 4). This change was not significantly different from the change in the MET group when corrected for multiple comparisons (Kruskal-Wallis test statistic = 4.0, uncorrected p-value = 0.05, corrected p-value = 0.2). The other subcortical regions did not show significant changes in either group (Table 4).
DISCUSSION

In a three-month clinical trial of yoga on memory and brain morphometry in women with an increased risk of developing AD, we found that weekly KY sessions paired with daily homework of KK preserved GMV in various regions across the brain compared to an active memory training control group and additionally led to increased left precentral and lateral occipital cortex volume. The active control MET group showed no increases in any region compared to the yoga group. Furthermore, we found that the yoga group improved in anxiety and depression, although the change in scores was not significantly different from the change in the MET group. We found no effects of yoga on memory, perceived stress, or resilience, likely due to the short intervention duration and the fact that this group of women was not clinically impaired. The small sample size may also have contributed to this lack of clinical effects. It is possible that GMV change occurs on a different time trajectory than behavioral or clinical changes. However, it is also plausible that the effect of yoga on GMV was mediated by variables we have not considered in this study. For example, our yoga training was relatively complex, including weekly group practices involving poses, meditation and chanting, interspersed with the daily home practice of meditative chanting with mental visualizations. While the individual components of this practice may have affected very specific cognitive, sensory, and motor functions and domains, they may have had little influence on memory, subjective stress, and resilience. In a previous study using the same yoga versus MET paradigm in a group of mixed older men and women with MCI, we found memory improvements in both intervention groups [29]. The regions in which we observed GMV decline with memory training were widespread across the cortex. The clusters are consistent with maps of age-related cortical thinning and volume loss [50]. To the best of our knowledge, whole-brain longitudinal effects of yoga intervention paradigms that include a movement practice have not yet been reported. In a similar mixed-sex population, we previously demonstrated a trend toward increased dorsal anterior cingulate cortex volume in the memory training group compared to yoga, but no effects in the hippocampus [30]. While the mechanisms of the observed preservation of GMV in the yoga group are unclear, we hypothesize that it might be due to the multi-modal nature of our intervention. For example, the cluster in the left precentral cortex covered a large portion of the primary motor cortex, while the cluster in the right postcentral cortex largely overlaps with the primary somatosensory cortex. Both movement and sensation are strong components of our yoga training, which consisted of weekly posture classes, while the daily homework sessions also included touching each finger to the thumb sequentially (motor and somatosensory) and chanting (motor). Other aspects of our training included meditation, visualization, motor planning and coordination, auditory stimulation, relaxation, attention, and verbal expression (i.e., chanting mantra). KK has previously been shown to increase cerebral blood flow in the right temporal cortex and the right posterior cingulate gyrus, while it decreased blood flow in left temporo-parietal and occipital gyri compared to rest, as measured using single-photon emission computed tomography [51]. We observed right hippocampal volume increases in the yoga group.
While this effect did not survive corrections for multiple comparisons, it was a hypothesis-driven specific analysis, based on, and in line with, previous findings [26,27,30]. We believe that hippocampal growth in this population is likely to occur. For instance, a postmortem study found that healthy humans have the capacity for hippocampal neurogenesis until at least their 8th decade of life [52]. According to the authors, angiogenesis and neuroplasticity decrease at older age in certain subparts of the hippocampus. It is therefore possible that even though our effect did not remain significant, beneficial neuroplastic changes may have occurred. Future studies should investigate the effect of longer yoga interventions on hippocampal growth in this population. For example, preserved hippocampal volume was found in a variety of exercise interventions, as reported by a recent large-scale meta-analysis [53]. The authors found that while the exercise groups consisting of aerobic exercise types across various healthy, psychiatric and dementia-type populations showed no change in hippocampal volume, the respective control groups demonstrated significant decreases that were in line with typical annual rates of age-related atrophy. Notably, the hippocampal volume effects were significant for older samples over the age of 65. Furthermore, only interventions longer than 39 weeks were successful, and the effects held for across-hemisphere, but not lateralized, hippocampal volume. While our current study was not specifically an aerobic exercise program, certain KY exercises and rapid breathing techniques may have aerobic components. It is possible that we would be able to detect significant hippocampal preservation with longer training periods, assuming that the age-related hippocampal decline would be sharper after 6 or 12 months. We believe that our results demonstrate real GMV changes, despite the short intervention period of 12 weeks. While longitudinal research reporting the rates of cortical changes in different age and medical populations and across shorter time periods is scarce, there is evidence that the annual gray matter loss in healthy adults over 50 is more than 1% [54]. It has also been demonstrated in cross-sectional postmortem studies that even with advanced age, the capacity to form new tissue in the medial temporal lobe is still present [52]. In younger adults, early neuroplasticity training studies have shown that 12 weeks of juggling training was sufficient to increase cortical volume in visuospatial regions, which, after a small decrease, still remained far above the baseline level after a further 12-week follow-up [55]. Later, the same group demonstrated that such gray matter increases are already detectable after only seven days of juggling training, which appeared to be less likely due to performance or exercise, but more due to the fact that participants were acquiring a new skill [56]. It is therefore plausible, yet not systematically investigated, that older women with an increased risk of AD may undergo observable GMV decline across 12 weeks. This study has several limitations: first, the sample size was small and relatively homogenous in terms of participants' levels of education and socioeconomic status. Since lower educational levels are a risk factor for AD [57], our more highly educated and relatively healthy sample may not be representative of the general population.
While we used non-parametric statistical methods that are robust to smaller sample sizes, additional effects of yoga on clinical or brain data are likely to be masked by these sample-size limitations. Second, the dropout rates differed between the groups for different reasons. Prior to and during the intervention, 14 participants discontinued in the yoga group, while only two discontinued in the MET group. MRI eligibility was only half as problematic in the yoga group compared to the MET group. Eventually, the same number of participants was available for analysis in both groups. The difficulty with MRI scanning in this population is that older people have more implanted devices, which cannot be verified as safe because medical records are often unavailable when the surgery was performed more than twenty years prior. Similarly, some participants were able to undergo scanning at baseline, but had implant surgery during the intervention period and were no longer eligible for safety reasons at follow-up. In addition, some participants were reluctant to undergo an MRI scan or to participate for other personal reasons. Third, we may be observing a "ceiling effect" in healthy women without significant cognitive impairment at baseline. Fourth, the duration of 12 weeks of intervention may not be sufficient to detect cognitive benefits. Longer interventions may have more beneficial effects on cognition. Furthermore, while our target population consisted of healthy older women with SCD and CVRFs, we did not compare their brain GMV and clinical scores to age- and education-matched controls without cognitive problems or CVRFs. We did not use a placebo or usual-care group to document the natural progression of brain aging, which may underlie the observed GMV decreases in the MET group. The effect of yoga versus MET on women with a low AD risk profile is therefore still unknown and remains to be investigated. To conclude, we were able to demonstrate that three months of KY training combined with a daily practice of KK had protective effects on brain regions known to undergo age-related cortical decline and may lead to improvements in anxiety and depression in older women with subjective cognitive decline and CVRFs. While the long-term effects of this intervention and its efficacy in AD prevention or delay remain to be established, our results suggest that yoga can have beneficial effects on brain and mental health in this high-risk group. Since yoga is considered safe and has shown positive effects on a variety of stressors as well as on brain and mental health, it is a promising practice for older women with increased AD risk. Whether yoga has the capacity to delay the onset of AD remains an open question.
Multiscale model for forecasting Sabin 2 vaccine virus household and community transmission

Since the global withdrawal of Sabin 2 oral poliovirus vaccine (OPV) from routine immunization, the Global Polio Eradication Initiative (GPEI) has reported multiple circulating vaccine-derived poliovirus type 2 (cVDPV2) outbreaks. Here, we generated an agent-based, mechanistic model designed to assess OPV-related vaccine virus transmission risk in populations with heterogeneous immunity, demography, and social mixing patterns. To showcase the utility of our model, we present a simulation of mOPV2-related Sabin 2 transmission in rural Matlab, Bangladesh based on stool samples collected from infants and their household contacts during an mOPV2 clinical trial. Sabin 2 transmission following the mOPV2 clinical trial was replicated by specifying multiple, heterogeneous contact rates based on household and community membership. Once calibrated, the model generated Matlab-specific insights regarding poliovirus transmission following an accidental point importation or mass vaccination event. We also show that assuming homogeneous contact rates (mass action), as is common in poliovirus forecast models, does not accurately represent the clinical trial and risks overestimating forecasted poliovirus outbreak probability. Our study identifies household and community structure as an important source of transmission heterogeneity when assessing OPV-related transmission risk and provides a calibratable framework for expanding these analyses to other populations.

Trial Registration: This trial is registered with clinicaltrials.gov, NCT02477046.

Introduction

Mass immunization with the live-attenuated oral poliovirus vaccines (OPV) has successfully reduced wild poliovirus (WPV) incidence by 99% and resulted in the eradication of two of the three poliovirus serotypes (types 2 and 3) [1-3]. However, complete poliovirus eradication must include the three OPV vaccine viruses (Sabin 1, 2, and 3), which are transmissible and capable of reverting their attenuation. Circulating vaccine-derived poliovirus (cVDPV) can cause clinical poliomyelitis cases indistinguishable from those of WPV [4,5] and is a rapidly growing public health threat. Despite the withdrawal of Sabin 2 OPV from global routine immunization schedules in 2016 (during an event known as "the Switch"), cVDPV2 (Sabin 2 cVDPV) outbreaks have been increasing in frequency. The GPEI is caught in a paradoxical situation because the current cVDPV2 containment strategy relies on immunization with monovalent Sabin 2 OPV (mOPV2) [6,7]. Nearly half of the outbreaks reported in 2019 were traced to a previous mOPV2 vaccination campaign [8]. Policymakers must develop strategies that ensure the complete eradication of poliovirus even as virus is reintroduced from mOPV2 vaccination. As global immunity against type 2 poliovirus declines, population-specific differences in demography and social mixing will be increasingly relevant for quantifying vaccine virus transmission risk and outbreak probability [9-11]. Quantities such as outbreak probability and the probability of establishing endemic transmission [12,13] are sensitive to variations in transmission structure that are not always represented in poliovirus forecast models due to their reliance on mass action, which assumes homogeneous transmission and social mixing [9,14,15].
Of the forecast models used during the 2014-2015 West African Ebola outbreak, among the most accurate was a mechanistic model that defined spatio-temporal trends in transmission based on collected field data [16]. Adopting a similar approach and generating a poliovirus transmission model that uses transmission data collected from mOPV2 campaigns or during a cVDPV2 outbreak could improve model accuracy and support poliovirus eradication. Here, we present an agent-based, mechanistic model designed to assess OPV-related vaccine virus transmission risk in realistic populations with different immunity, demography, and social mixing patterns. Individual immunity and infections are simulated using a previously described poliovirus infection model informed by decades of clinical trial research [17]. The model uses a household evolution model informed by cultural anthropology studies and demographic health surveys to generate multiscale populations that organize individuals into dynamically changing households and other community structures. Simulated transmission incorporates transmission heterogeneity from differences in immunity, shedding duration, shedding concentration, shifting household compositions, and social contact preference. OPV genetic reversion [5] was not simulated in this study and is instead addressed in a follow-up study [18]. Here, we focused on comparing the transmission of a hypothetical, genetically stable Sabin 2 vaccine strain with a fully reverted cVDPV. To showcase the utility of our model, we present a simulation of mOPV2-related Sabin 2 transmission in rural Matlab, Bangladesh based on the viral shedding data collected from infants and their household contacts during an mOPV2 clinical trial [19]. Matlab is a highly structured population whose living arrangements and household and community structure are defined predominantly by family households, baris (a homestead of interrelated households), and villages. Once calibrated, the model generated Matlab-specific insights regarding poliovirus transmission following an accidental point importation or mass vaccination event. We also contrast the results with a simpler mass action model that assumes homogeneous mixing to identify whether social mixing patterns, such as those imposed by household and community structure, impact poliovirus forecasting.

Ethics statement

This clinical trial was conducted according to the guidelines of the Declaration of Helsinki. The protocol was approved by the Research Review Committee (RRC) and Ethical Review Committee (ERC) of the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b) and the Institutional Review Board of the University of Virginia. The data used in this project are from a randomized vaccine trial conducted in Matlab, Bangladesh in 2015-2016. Formal written consent was obtained from all adult study participants and from the parents/guardians of all underage children in the study, and formal verbal consent was obtained from the parents/guardians of all children under 5 years of age who received the mOPV2 vaccine during the special immunization campaign. The clinical trial was registered at ClinicalTrials.gov, number NCT02477046.

Clinical trial design and sample collection

We previously performed an mOPV2 challenge campaign in the Maternal, Child, and Family Planning intervention region of rural Matlab, Bangladesh, which is described in detail in ref. [19]. A schematic of the trial design is presented in S1 Fig.
Briefly, villages were assigned routine immunization with trivalent OPV (tOPV; Sabin 1, 2, and 3), bivalent OPV (bOPV; Sabin 1 and 3) plus one inactivated poliovirus vaccine (IPV) dose, or bOPV plus two IPV doses. All infants were vaccinated by one of these three routine immunization schedules unless a medical contraindication was present. A subset of infants, their two youngest household contacts, and community participants less than five years of age were enrolled in the study. For the subset of infants enrolled in the study, stool samples were collected before routine immunization and at age 18 weeks. Stool samples were also collected from their household contacts when the infant was 18 weeks of age. The mOPV2 campaign challenged approximately 33% of the enrolled infants, 6% of the household contacts, and 40% of the community participants with mOPV2. Stool samples from infants and household contacts were collected at weekly intervals in the first ten weeks following the mOPV2 vaccination campaign and at weeks 14, 18, and 22. No stool samples were collected from the community participants. For all participants, the bari of residence was known but no additional information regarding household structure (size or composition) was recorded. The stool samples collected after mOPV2 challenge were organized into a series of eight cohorts defined by the individual providing the sample (infant or household contact), their mOPV2 challenge status, and their household/bari membership (Table 1). Cohorts 1-3 are infants while cohorts 4-8 are household contacts. The "Challenged with mOPV2" column indicates whether the individuals in a cohort received Sabin 2 vaccine (+ for yes, − for no) during the mOPV2 campaign. For cohorts 4-8, the "Infant status" column indicates whether the infant of the household contact received mOPV2. The "Bari status" column indicates whether any member of the bari received mOPV2. The "Infection source" column indicates the type of transmission each cohort is most sensitive to; note that this column lists the most likely transmission source, but transmission from other household and community members is possible. Individuals in cohorts one, four, and seven received mOPV2, and shedding in these cohorts largely reflects individual infection dynamics.

Model structure

The model is agent-based and divided into three sub-models (Fig 1): 1) a household evolution model that simulates changes in composition due to births, deaths, and marriage (Fig 1B); 2) a previously described poliovirus infection model [17] that determines susceptibility, shedding duration, and viral shedding concentration (Fig 1C); and 3) a contact-based transmission model that mediates transmission by specifying the rate with which infected individuals contact other individuals based on their respective household and community memberships (Fig 1D). A brief description of each model is presented below and further details are reported in the S1 Text. The poliovirus infection model defines immunity as the OPV-equivalent antibody titer and models susceptibility to infection as a dose-dependent response [17] (S1 Text). A consequence of this model is that individuals have a non-zero probability of being infected at all levels of pre-exposure immunity, with the probability of infection depending on the degree of immunity and the viral dose. The OPV-equivalent antibody titer is a measure of serum neutralizing antibody; it peaks immediately after infection and wanes over time.
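To illustrate the dose-dependent susceptibility just described, the sketch below assumes a beta-Poisson dose-response of the kind used in the infection model of ref. [17], in which pre-exposure immunity rescales the dose-response exponent. The parameter values are placeholders rather than the calibrated values from that work; the point of the example is the qualitative behavior, a non-zero infection probability at every immunity level.

```python
# Sketch of a beta-Poisson dose-response with immunity-dependent scaling
# (assumed functional form; alpha, beta, gamma are placeholder values).
import numpy as np

def p_infection(dose_ccid50, nab_titer, alpha=0.44, beta=14.0, gamma=0.46):
    """Probability of infection given an oral dose (in CCID50 units) and the
    individual's pre-exposure OPV-equivalent antibody titer (nab_titer >= 1).
    Higher immunity lowers, but never eliminates, the infection probability."""
    return 1.0 - (1.0 + dose_ccid50 / beta) ** (-alpha * nab_titer ** (-gamma))

for nab in (1, 8, 64, 512):
    print(f"titer {nab:4d}: P(infection | dose = 1e3 CCID50) = "
          f"{p_infection(1e3, nab):.3f}")
```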
Individuals with lower OPV-equivalent antibody titers prior to infection shed more virus, and for longer periods of time, than individuals with higher titers.

Households and demographic structure are generated using the household evolution model. The model generates multigenerational families by simulating household evolution based on anthropological frameworks of household formation and dissociation [20,21] and on fertility, mortality, and marriage statistics from demographic health survey reports [22-24]. This model was used to generate multiscale populations in which individuals and transmission events are grouped into a series of nested demographic scales that define household and community structure. For Matlab, these scales were defined by households, baris, and villages.

Transmission depends on the total viral exposure per contact and on the contact structure used to specify social mixing. For each infection, the model calculates the viral exposure per contact based on the viral shedding concentration of the infected individual and the expected fecal-oral dose (grams) per contact. The average daily fecal-oral dose for Matlab was unknown and its value was determined during model calibration (S1 Text). During each daily timestep, infected individuals randomly sample and transmit virus to other individuals in the population.

[Fig 1. Model overview. A) Matlab community structure: households are grouped into baris, which are grouped into villages that together define the Matlab Region. Map borders were obtained and used with permission from the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b) (https://github.com/InstituteforDiseaseModeling/community-structure-mediates-polio-transmission/tree/main/PopSim/data/shapefiles). B) Household evolution model: households are represented as pedigree trees and their composition changes as individuals are born and removed through either death or marriage. The model is parameterized by the age- and sex-specific fertility, mortality, and marriage rates of individuals and the rate with which newlywed couples move out after marriage (S1 Text). C) Poliovirus infection model: on infection, individuals receive a boost in OPV-equivalent antibody titer and are assigned a shedding duration based on their titer prior to infection. Individual antibody titers are dynamic and decline over time. Viral shedding is measured in cell culture infectious dose 50 (CCID50) units per fecal-oral dose and depends on the antibody titer preceding infection and the time since the infection started; infections shed less virus during their later stages. D) Contact-based transmission: simulated poliovirus transmission is the result of direct contact with infected individuals. Multiscale transmission is specified using four contact rates (β_household, β_bari, β_village, β_intervillage) that give the number of household, bari, village, and non-village members infected individuals contact per day. Homogeneous transmission (mass action) is specified with a single contact rate (β_ma) that gives the number of individuals infected individuals contact per day, regardless of household and community structure; this results in a heavy bias towards contacting non-village members, who are much more numerous than the household, bari, and village contacts of an individual. https://doi.org/10.1371/journal.pcbi.1009690.g001]
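As a concrete illustration of the daily transmission step, the sketch below samples the contacts of one infected individual, assuming the number of contacts in each stratum is Poisson-distributed around the per-stratum daily rates shown in Fig 1D (β_household, β_bari, β_village, β_intervillage). The population sizes and rate values are illustrative, not the calibrated Matlab values.

```python
# Sketch of one infected individual's daily contact sampling under the
# multiscale contact model; rates and stratum sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def daily_contacts(strata, rates, rng):
    """Sample the individuals one infected person contacts in one day.
    `strata` maps stratum name -> array of candidate contact IDs;
    `rates` maps stratum name -> mean contacts per day in that stratum."""
    contacts = []
    for name, beta in rates.items():
        k = min(rng.poisson(beta), len(strata[name]))
        if k > 0:
            contacts.extend(rng.choice(strata[name], size=k, replace=False))
    return contacts

strata = {"household": np.arange(5), "bari": np.arange(5, 25),
          "village": np.arange(25, 500), "intervillage": np.arange(500, 20000)}
rates = {"household": 0.5, "bari": 1.0, "village": 0.3, "intervillage": 0.05}

# Mass action would instead draw rng.poisson(beta_ma) contacts uniformly from
# the whole population, which is dominated by non-village members. Each
# contacted individual then receives a fecal-oral dose and becomes infected
# with the dose- and immunity-dependent probability sketched earlier.
print(daily_contacts(strata, rates, rng))
```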
When simulating multiscale, household and community transmission, the contact-based transmission model was parameterized using four contact rates (β_household, β_bari, β_village, and β_intervillage). These rates determine the number of household, bari, village, and non-village members to sample. When assuming homogeneous transmission, a single contact rate, β_ma (where ma stands for mass action), determines the number of individuals to sample from the entire simulation.

Simulating Matlab community structure

The first study goal was to generate synthetic populations using the household evolution model, which simulates household evolution following births, deaths, and marriage based on anthropological rules of household formation and secession (Materials and Methods). The household evolution model was calibrated to demographic health survey reports [22-24] that, on average, capture the household and community structure of rural Bangladesh. Household and community structure was defined and evaluated on four criteria: 1) the household size distribution (Fig 2A), 2) the age distribution of infants and their two youngest household contacts observed in the study (Fig 2B), 3) the population age pyramid (Fig 2C), and 4) the size distribution of the 45 villages assigned with routine bOPV immunization (Fig 2D). The contact age distributions (Fig 2B) were not used in fitting the model. Rather, the observed similarity between data and model derives from the specification of individual, age- and sex-specific fertility, mortality, and marriage rates. Simulating the Bangladeshi age pyramid required additional information regarding historical fertility and mortality rates. Historical fertility and mortality rates were approximated by extrapolating the rates reported in the 2004 (the earliest obtainable report containing both) and 2014 Bangladesh Demographic and Health Surveys (BDHS) [22,23]. This approximation is imperfect, as it underestimates the proportion of individuals under fifteen and overestimates the proportion over fifty. Nonetheless, the simulated age pyramid reproduced a similar flared base at younger age groups, a defining characteristic of populations with rapidly declining fertility rates [23]. Future model fits could be improved by incorporating historical fertility and mortality rates from earlier timepoints. To replicate the conditions of the Matlab study population prior to the mOPV2 campaign, we recapitulated the demographic and immune structure of the 45 villages assigned with bOPV routine immunization. The 22 villages assigned with routine tOPV immunization were excluded and residual tOPV-derived Sabin 2 transmission was not simulated. Infants were assigned OPV-equivalent antibody titers consistent with bOPV routine immunization and adults were assigned titers consistent with those expected of Matlab, Bangladesh (S1 Text).

Calibrating vaccine virus transmission to the mOPV2 clinical trial

The next study goal was to test whether household and community structure affected transmission heterogeneity. We compared the model fits of the multiscale and mass action contact-based transmission models. These models depend on specifying daily contact rates of infected individuals. The multiscale model has four transmission rate parameters, one for each level of social hierarchy, and allows transmission to be defined by household and community structure.
The mass action model assumes homogeneous transmission with a single transmission rate parameter (Materials and Methods, S1 Text). These models were calibrated to the eight cohort-specific shedding profiles collected after the mOPV2 campaign. Latin hypercube sampling and a pseudo-likelihood function minimizing the discrepancy between simulated and observed data were used to identify optimal parameters (S1 Text). These shedding profiles (Table 1) reflect vaccination with mOPV2 (cohorts one, four, and seven), transmission that most likely originated from a nearby shedding household or bari member (cohorts two, six, and five), and transmission that most likely originated from a village or non-village member (cohorts three and eight). These shedding profiles alone were insufficient to reject homogeneous transmission as a possible transmission mechanism (S1 Text). Initial calibration attempts with the multiscale model were unsuccessful, as a wide variety of equally reasonable fits could be obtained with multiple combinations of household, bari, within-village, and inter-village contact rates. Under standard parsimony practices, this would suggest that household and community structure does not perturb transmission significantly enough to reject the assumption of homogeneous transmission, or mass action. Alternatively, it could suggest that these shedding profiles lacked the power to differentiate transmission between household, bari, village, or non-village members. Fortunately, transmission following tOPV routine immunization during the enrollment period of the study provided an additional source of evidence for determining whether transmission depends on household and community membership. From the small number of tOPV-derived transmissions observed during the period prior to the mOPV2 campaign, we extrapolated bounds on the within- and between-village transmission rates for the mOPV2 campaign to serve as priors for model calibration (S1 Text). With these additional priors, the four-parameter multiscale model became identifiable (S1 Text). We tested three different transmission models: two mass action models, one with (Fig 3, green) and one without (Fig 3, purple) the additional tOPV-derived transmission data, and a multiscale model (Fig 3, orange) calibrated with the additional tOPV-derived transmission data. When assessed on the cohort shedding data alone, the multiscale model and the mass action model calibrated without the tOPV-derived priors from the enrollment period provided reasonable fits to the cohort shedding data (Fig 3). The Akaike Information Criterion (AIC) was 2352.59 for the multiscale model (S1 Text), 2355.40 for the mass action model calibrated without the tOPV data, and 2365.48 for the mass action model calibrated with the tOPV-derived priors. The better fit of the multiscale model was most prominent in cohort six, which monitored shedding in non-vaccinated household contacts of vaccinated infants and was thus most sensitive to household transmission. When assessed using all the evidence, the multiscale model was superior (AIC = 2350.17) to either of the mass action models, whose AICs were 2874.52 and 2380.87 with and without the tOPV-derived data, respectively. Taken together, these results show that poliovirus vaccine virus transmission in Matlab does depend on household and community membership, but that the cohort shedding data on their own lacked the power to show this. The difference in model fit resulted from a mechanistic difference in how transmissions were distributed.
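The calibration machinery itself can be summarized in a few lines. The sketch below draws Latin hypercube samples over the four contact rates with scipy's qmc module, scores each candidate with a pseudo-likelihood, and computes an AIC from the best fit; the objective function here is a cheap stand-in for running the full agent-based simulation against the cohort shedding profiles, and the bounds are illustrative.

```python
# Sketch of Latin hypercube calibration over the four contact rates plus an
# AIC comparison; the objective is a placeholder for the full simulation.
import numpy as np
from scipy.stats import qmc

def neg_log_pseudo_likelihood(theta):
    # Stand-in objective: the real analysis simulates transmission with the
    # contact rates in `theta` and scores simulated vs. observed shedding.
    target = np.array([0.5, 1.0, 0.3, 0.05])
    return 50.0 * float(np.sum((theta - target) ** 2))

sampler = qmc.LatinHypercube(d=4, seed=0)
lo, hi = [0.0] * 4, [2.0] * 4                        # illustrative rate bounds
candidates = qmc.scale(sampler.random(n=500), lo, hi)

scores = np.array([neg_log_pseudo_likelihood(t) for t in candidates])
best = candidates[np.argmin(scores)]
aic = 2 * 4 + 2 * scores.min()  # AIC = 2k - 2 ln L with k = 4 contact rates
print("best rates:", np.round(best, 2), "| AIC:", round(float(aic), 2))
```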
Both mass action models were rejected due to their inability to match the low levels of between-village transmission inferred from the tOPV-derived transmission data. The multiscale model predicted that 85% (83, 88) of transmissions occurred among bari and village members, while both mass action models predicted that >99% of events occurred between non-village members. Mass action favors inter-village transmission because the chance of sampling a non-village member far exceeds the probability of sampling a household, bari, or village member (S1 Table).

Forecasting vaccine virus transmission risk in Matlab, Bangladesh

We examined the consequences of the Switch on Sabin 2 vaccine virus transmission in Matlab. Three scenarios were examined: 1) Point importation: Sabin 2 vaccine virus transmission introduced by the accidental importation of a single Sabin 2 shedding infant; 2) Mass vaccination: Sabin 2 vaccine virus transmission following an mOPV2 vaccination campaign with up to 80% coverage in children under five years of age; and 3) cVDPV2 importation: cVDPV2 transmission following the accidental importation of a single cVDPV2 infected infant. To investigate the consequences of the Switch, these scenarios were performed in populations where routine Sabin 2 OPV vaccination had been discontinued for up to five years. Population-level immune profiles immediately before and up to forty years following the Switch are shown in S2 Fig.

Quantifying point importation outbreak risk

Accurately quantifying and identifying populations with high outbreak probability is critical for enabling eventual poliovirus eradication and preventing cVDPV spread. We explored the practical consequences of inaccurately assuming homogeneous transmission when generating outbreak forecasts by generating two predictions: one using the fully calibrated multiscale model, and one using the mass action model calibrated without the tOPV-derived transmission data, to determine whether the additional transmission heterogeneity imposed by household and community structure in the multiscale model was necessary for accurate risk assessment. Both models predicted outbreaks with highly heterogeneous outcomes. As a whole, assuming mass action increased the probability of severe outbreaks with larger infection numbers and transmission durations (Fig 4A-4C). Under the multiscale model (Fig 4B), point importations occurring five years after the Switch had a median transmission duration of 81 (9, 400) days, a maximum of 7 (1, 89) infections at the peak, and a total of 240 (8, 13000) infections. When evaluating the outbreak risks associated with point importation, we reasoned that only the most severe outcomes would be detectable during routine surveillance and be informative to interventions. For the purposes of this study, a severe outcome was defined by an outbreak that persisted for at least 300 days. Under this definition, 0. To determine whether the differences in model prediction reflected differences in transmission failure resulting from stochastic loss, high population-level immunity, or both, we examined the simulated immunity profiles during each point importation (Fig 4D). Under the multiscale model, the immunological profiles suggested that the vast majority of point importations failed due to stochastic transmission failure, as the outbreaks had no impact on population immunity and failed even though 20% of the population was susceptible (OPV-equivalent antibody titer < 8).
Point importations under the mass action model were more complex. Outbreaks had no effect on population immunity during the first 100 days, and point importations that failed during this period were likely the result of stochastic transmission failure. However, when conditioned on severe outbreaks, the simulations revealed a decrease in population immunity that coincided with each simulation's peak infection time. This immunological profile suggested that the outbreaks generated by the mass action model were more strongly limited by the depletion of susceptible individuals in the population.

[Fig 4. Point importation outcomes. Lasagna plots showing the total persistence times and numbers of infected individuals following a point importation five years after the Switch using the multiscale (B) and mass action (C, calibrated without the tOPV data; Fig 3, purple) models. Each line in the lasagna plot is a different simulation and 2000 iterations are shown. The length of each line indicates the persistence time and the color the number of shedding individuals at that timepoint. D) Immune profile of the first 300 days after a point importation resulting in a severe outbreak using the multiscale (orange) and mass action (purple, calibrated without the tOPV data) models. Severe outbreaks were defined as having a transmission duration of at least 300 days. Susceptibility was defined as having an OPV-equivalent antibody titer < 8. The increase in susceptibility over time is due to waning immunity and births. https://doi.org/10.1371/journal.pcbi.1009690.g004]

Evaluating the benefits and risks associated with mass mOPV2 vaccination

To examine the benefits and risks of mass mOPV2 vaccination, we simulated mass vaccination campaigns with up to 80% coverage in children under five (Fig 5). These campaigns were evaluated for 1) their effectiveness at promoting population immunity (Fig 5A and 5B) and 2) the risks associated with reintroducing live poliovirus and vaccine-derived transmission (Fig 5C-5E). Interestingly, the disparity between the multiscale and mass action models decreased with vaccination coverage and was smallest when coverage was 80% (Fig 5A and 5B, grey). With the exception of the results shown in Fig 5A and 5B, all analyses were made using the multiscale model. As expected, mass mOPV2 vaccination improved population immunity against type 2 poliovirus due to a mix of both direct vaccination and vaccine-derived virus transmission (Fig 5A and 5B). Vaccine virus transmission reduced the proportion of susceptible children under five by an additional 10-15% within the first month of the campaign. For an 80% coverage vaccination campaign performed five years after the Switch, primary vaccination reduced the proportion of susceptible children under five to 0.257 (0.250, 0.263), with vaccine virus transmission providing a further reduction to 0.145 (0.137, 0.154) after 29 days (Fig 5A). This proportion was similar to the proportion of susceptible children under five prior to the Switch (S2B Fig). However, population susceptibility (Fig 5A and 5B) rebounded as vaccine virus transmission waned (Fig 5C) and the direct immunological benefits conferred by the vaccination campaign were reversed within the first year. Two factors affected the duration of vaccine virus transmission: 1) the time since the Switch, and 2) the coverage level of the vaccination campaign. As expected, the expected vaccine virus transmission duration following each campaign has increased since the withdrawal of routine immunization (Fig 5D and 5E).
Immediately after the Switch, increasing mOPV2 vaccination coverage from 10% to 80% caused the expected transmission duration to increase from 230.1 (225.68, 234.67) to 298.38 (293.29, 303.71) days. Five years after the Switch, the expected durations of vaccine virus transmission were 301.62 (297.76, 305.59) and 325.14 (320.99, 329.74) days for a 10% and an 80% coverage campaign, respectively (Fig 5E). To examine the consequences of multiple, single-dose mOPV2 campaigns, we also simulated a campaign with two single-dose mOPV2 interventions (Fig 6). The first occurred five years after the Switch and the second occurred one year after the first. Susceptibility among children under five accumulated less rapidly after the second campaign (Fig 6A). The proportion of children under five who were susceptible to infection was 0.48 (0.47, 0.49) one year after the first campaign and 0.33 (0.33, 0.35) one year after the second. The model also predicted less vaccine virus transmission after the second campaign (Fig 6C-6E), in terms of both the number of shedding individuals and the total duration of vaccine virus transmission.

Forecasting circulating vaccine-derived poliovirus transmission risk

Finally, we examined whether the point importation of a cVDPV2 infected individual could cause an outbreak in Matlab. cVDPV2 infections were modeled by increasing the viral infectivity and shedding durations using the corresponding parameters for WPV described in [17]. Simulated cVDPV2 viruses were three times more infectious and caused infections that shed 1.4 times longer than Sabin 2 in immunologically naïve individuals [17]. The model predicted that the cVDPV2 outbreak risk in Matlab has grown since the Switch. Before the Switch, the model predicted that most cVDPV2 importations were self-limiting and ceased transmission within the first year (Fig 7A); 0.008 of the point importations resulted in severe outbreaks, with a median peak infection count of 170 (72, 256). Five years after the Switch, the model predicted that 0.61 of point importations would result in severe outbreaks, with a median peak infection count of 9534.0 (9447.0, 9668.0). While the expected transmission duration of these severe outbreaks was 516 (508, 524) days, 0.03 of the point importations sustained transmission for more than two years.

Discussion

Rapid mOPV2-related risk assessments are needed to guide policy decision-making and ensure the continued safety and efficacy of OPV vaccines. To enable future risk assessment, we developed a new poliovirus transmission model that differs from previous dynamic poliovirus models [9,10,25,26] by integrating the biological aspects describing the immunity and shedding durations of infected individuals [17] with the social aspects of transmission described by household and community structure. Overall, the model showed that the decline in type 2 immunity following the Switch has elevated vaccine virus transmission and increased the frequency and severity of point importation outbreaks. The increased risk is particularly significant for cVDPV2 exportation, owing to the increase in infectiousness due to genetic reversion [6,14]. For Matlab, we found that Sabin 2 transmission was congruent with a model centered on transmission between local bari and village members. Surprisingly, the multiscale model predicted that bari members have the highest burden of infection, which may be partially due to the sharing of communal kitchens within each bari.
This pattern of localized transmission is supported by cholera surveys in Matlab [27], which have reported strong spatial clustering of cases at distances less than two kilometers and within baris [28,29]. The cumulative effect of household and community structure is an increase in the stochasticity of transmission. Mass action underestimates stochastic transmission loss and risks overestimating transmission when the initial infection count is small, such as during a point importation or a low coverage vaccination campaign. Accurately characterizing the stochastic transmission heterogeneity is critical for risk assessments, as populations with identical immunological histories will have different risks depending on how contact and transmission structure influences stochastic transmission loss. While our simulations showed that vaccine virus transmission during a mass mOPV2 campaign extends immunity beyond the primary vaccination recipients, this benefit is limited to the first few weeks after a campaign. The simulations predict that residual vaccine virus transmission can be maintained for months after the mOPV2 campaign. This transmission is too limited to impact population immunity but poses a significant risk for cVDPV2 emergence and exportation. These results are consistent with the tracing of cVDPV2 outbreaks to mOPV2 campaigns performed in other regions months after the original campaign ended [8]. Unsettlingly, the point importation simulations suggest that detected cVDPV2 outbreaks represent a small fraction of all importation events, and that vaccine virus exportation is common. Preventing the exportation of vaccine virus to non-intervention regions will be a major challenge for future poliovirus eradication campaigns. The need to manage this risk supports the practice of using repeated, ideally high coverage, mOPV2 campaigns that promote immunity and better limit vaccine-derived transmission by depleting the availability of susceptible individuals with low immunity, reducing the probability of new infection, and reducing the viral shedding concentrations and durations of infected individuals [17] (Supplemental). A subject of future study would be to characterize the optimum frequency of vaccination campaigns that maximally utilizes the limited global stock of OPV while reversing the accumulation of susceptible individuals with as little residual vaccine-derived transmission as possible. However, these campaigns must be coupled with precautions designed to prevent vaccine-derived virus from escaping the intervention zone, as any OPV vaccine-based intervention strategy will always bear a significant risk of seeding other populations with vaccine virus as global type 2 immunity continues to decline. Heightened surveillance of regions proximal to the targeted intervention zone will be a necessary precaution and will need to detect any potential vaccine virus exportation early enough to prevent its spread. Using the model to generate population-specific estimates of poliovirus transmission risk remains a challenge. A major goal of this study was to enable improved risk assessment in regions such as sub-Saharan Africa, Afghanistan, and Pakistan, where recurrent cVDPV2 outbreaks and sustained vaccine-derived transmission are prevalent and immunity to type 2 poliovirus has historically been low [30,31].
To our knowledge, it is unknown how heterogeneous transmission operates in these regions or whether certain regions are expected to have differing levels of transmission heterogeneity due to differences in household and community structure. While the clustering of households into baris made Matlab an ideal study location for examining the effects of household and community structure on poliovirus transmission, that same distinctive structure makes it difficult to assess whether poliovirus transmission in other regions is similarly impacted. Thus, assessing the impact of household and community structure and determining the extent to which mass action is inappropriate will be an important step towards enabling accurate cVDPV2 outbreak and transmission risk assessment in regions outside of Matlab. In conclusion, our study provides a framework for generating population-specific assessments of OPV-related transmission risk and stresses the importance of household and community membership when modeling poliovirus transmission. We note that our current study underestimates the true risks associated with mOPV2 vaccination because the model does not yet simulate the vaccine virus attenuation reversal due to genetic reversion [5]. Integrating vaccine virus evolution will be necessary to truly assess the risks associated with cVDPV2 emergence and the duration of vaccine-derived virus circulation following mOPV2 usage. Future iterations of the model could also explore the use of the novel oral polio vaccines [32], and evaluate whether the incorporation of other social factors, such as preferential contact structures based on age, occupation, and time, would improve model fit [33]. Other applications include replacing the poliovirus infection model to simulate the transmission of other fecal-oral diseases, such as typhoid, shigellosis, and other diarrheal diseases [34-36].

S1 Table. Contact probabilities. The average number of household, bari, village, and non-village members relative to each infected individual. The probability of contacting each of these individuals under mass action was quantified as the average number of individuals in each category divided by the average number of individuals in the population. The expected number of contacts (E(Contact)) was quantified by multiplying these probabilities by the number of contacts an infected individual makes per day under the β_ma parameter of the mass action model calibrated without the tOPV data. The expected numbers of contacts for the multiscale model were the same as the β parameters identified for that model.
Measurement of Dijet Production in Diffractive Deep-Inelastic Scattering with a Leading Proton at HERA

Abstract

The cross section of diffractive deep-inelastic scattering ep → eXp is measured, where the system X contains at least two jets and the leading final state proton is detected in the H1 Forward Proton Spectrometer. The measurement is performed for fractional proton longitudinal momentum loss x_P < 0.1 and covers the range 0.1 < |t| < 0.7 GeV² in squared four-momentum transfer at the proton vertex and 4 < Q² < 110 GeV² in photon virtuality. The differential cross sections extrapolated to |t| < 1 GeV² are in agreement with next-to-leading order QCD predictions based on diffractive parton distribution functions extracted from measurements of inclusive and dijet cross sections in diffractive deep-inelastic scattering. The data are also compared with leading order Monte Carlo models.

Introduction

Diffractive processes such as ep → eXY, where the systems X and Y are separated in rapidity, have been studied extensively in deep-inelastic scattering (DIS) at the electron-proton collider HERA [1-8]. Diffractive DIS events can be viewed as resulting from processes in which the photon probes a net colour singlet combination of exchanged partons. The photon virtuality Q², the high transverse momentum of jets or a heavy quark mass can provide a hard scale for perturbative QCD calculations. For semi-inclusive DIS processes such as ep → eXp, the hard scattering QCD collinear factorisation theorem [9] allows the definition of diffractive parton distribution functions (DPDFs). The dependence of diffractive DIS on a hard scale can thus be treated in a manner similar to the treatment of inclusive DIS, for example through the application of the DGLAP parton evolution equations [10-14]. DPDFs have been determined from QCD fits to diffractive DIS measurements at HERA [2,3,8]. The inclusive diffractive DIS cross section is directly proportional to the sum of the quark DPDFs and constrains the gluon DPDF via scaling violations. The production of diffractive hadronic final states containing heavy quarks or jets proceeds mainly via boson gluon fusion (BGF) and therefore directly constrains the diffractive gluon density [3,8]. In previous analyses at HERA, diffractive DIS events have been selected on the basis of the presence of a large rapidity gap (LRG) between system Y, which consists of the outgoing proton or its dissociative excitations, and the hadronic final state, system X [3,4]. The main advantage of the LRG method is its high acceptance for diffractive processes.
A complementary way to study diffraction is the direct measurement of the outgoing proton, which remains intact in elastic interactions. This is achieved by the H1 experiment using the Forward Proton Spectrometer (FPS) [15,16], which is a set of tracking detectors along the proton beam line. Despite the low geometrical acceptance of the FPS, this method of selecting diffractive events has several advantages. The squared four-momentum transfer at the proton vertex, t, can be reconstructed with the FPS, while in the LRG case this is only possible for exclusive final states. The FPS method selects events in which the proton scatters elastically, whereas the LRG method does not distinguish between the case where the scattered proton remains intact and the case where it dissociates into a system of low mass M_Y. The FPS method also allows measurements to be performed at higher values of the fractional proton longitudinal momentum loss, x_P, than is possible with the LRG method. This paper presents the first measurement of the cross section for the diffractive DIS process ep → ejjXp, with two jets and a leading proton in the final state. The diffractive dijet cross sections are compared with next-to-leading order (NLO) QCD predictions based on DPDFs from H1 [2,3] and with leading order (LO) Monte Carlo (MC) simulations based on different models. The dijet cross sections are measured for two event topologies: one where two jets are found in the central pseudorapidity range, labelled 'two central jets', and one where one jet is central and one jet is more forward, labelled 'one central + one forward jet'. The universality of DPDFs is studied using events with two central jets. The distributions of the proton vertex variables x_P and t are compared to those of the inclusive diffractive DIS case. This comparison tests the proton vertex factorisation hypothesis, which assumes that the DIS variables factorise from the four-momentum of the final state proton. The data are also compared directly with the LRG measurement of the dijet cross section in diffractive DIS [3] in order to test the compatibility of the two experimental techniques. Finally, events with one central and one forward jet are used to investigate diffractive DIS in a region of phase space where effects beyond DGLAP parton evolution may be enhanced. This topology is not accessible with the LRG method since the rapidity gap requirement limits the pseudorapidity of the reconstructed jets to the central region.

Kinematics

Figure 1 illustrates the dominant process for diffractive dijet production in DIS. The incoming electron with four-momentum k interacts with the proton with four-momentum P via the exchange of a virtual photon with four-momentum q. The DIS kinematic variables are defined as

Q² = −q², x = Q² / (2 P·q), y = (P·q) / (P·k),

where Q² is the photon virtuality, x is the longitudinal momentum fraction of the proton carried by the struck quark and y is the inelasticity of the process. These three variables are related via Q² = x y s, where s denotes the ep centre-of-mass energy squared. The hadronic final state of diffractive events consists of two systems X and Y, separated by a gap in rapidity. In general, the system Y is the outgoing proton or one of its low mass excitations. In events where the outgoing proton remains intact, M_Y = m_p, the mass of the proton.
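As a numeric check on the definitions above, the sketch below computes Q², x and y from example four-vectors and verifies the relation Q² = xys. The scattered-electron momentum is illustrative and particle masses are neglected.

```python
# Sketch: reconstructing the DIS variables from four-vectors (E, px, py, pz).
import numpy as np

def mdot(a, b):
    """Minkowski product with metric (+, -, -, -)."""
    return a[0] * b[0] - np.dot(a[1:], b[1:])

k = np.array([27.6, 0.0, 0.0, -27.6])        # incoming electron (GeV)
P = np.array([920.0, 0.0, 0.0, 920.0])       # incoming proton, mass neglected
k_prime = np.array([25.0, 4.0, 0.0, -24.6])  # illustrative scattered electron
q = k - k_prime                              # exchanged virtual photon

Q2 = -mdot(q, q)
x = Q2 / (2.0 * mdot(P, q))
y = mdot(P, q) / mdot(P, k)
s = mdot(k + P, k + P)                       # squared ep centre-of-mass energy
print(f"Q2 = {Q2:.2f} GeV^2, x = {x:.5f}, y = {y:.3f}")
print(f"x*y*s = {x * y * s:.2f} GeV^2 (equals Q2 up to mass corrections)")
```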
The kinematics of diffractive DIS are described by the variables

x_IP = q·(P − P') / (q·P),  t = (P − P')²,  β = Q² / (2 q·(P − P')) = x / x_IP,

where P' denotes the four-momentum of the outgoing proton, x_IP is the longitudinal momentum fraction of the proton carried by the colour singlet exchange, t is the squared four-momentum transfer at the proton vertex and β is the fractional momentum of the diffractive exchange carried by the struck parton. The longitudinal momentum fraction of the diffractive exchange carried by the parton entering the hard scatter is

z_IP = q·v / (q·(P − P')),

where v is the four-momentum of the parton.

Theoretical framework and Monte Carlo models

Within Regge phenomenology, cross sections at high energies are described by the exchange of Regge trajectories. The diffractive cross section is dominated by a trajectory usually called the Pomeron (IP). In analyses of HERA data [2,3,8], diffractive DIS cross sections are interpreted assuming 'proton vertex factorisation', which provides a description of diffractive DIS in terms of a resolved Pomeron [17,18]. The QCD factorisation theorem and DGLAP parton evolution equations are applied to the dependence of the cross section on Q² and β, while a Regge-inspired approach is used to express the dependence on x_IP and t. The resolved Pomeron (RP) model [17] is implemented in the RAPGAP event generator [19]. RAPGAP implements both a leading Pomeron (IP) trajectory and a sub-leading 'Reggeon' (IR). In this analysis the DPDF set H1 2006 Fit B [2] is used, which employs the Owens pion PDFs [20] for the partonic content of the Reggeon. The Reggeon contribution is significant for x_IP > 0.01. Higher order QCD radiation is modelled by parton showers. Processes with a resolved virtual photon are also included, with the photon structure function given by the SaS-G 2D LO parameterisation [21]. In the two-gluon Pomeron (TGP) model [22,23], the diffractive exchange is modelled at LO as the interaction of a colourless pair of gluons with a qq̄ or qq̄g configuration emerging from the photon. The model is implemented in the RAPGAP generator. Higher order effects are simulated using parton showers. The unintegrated gluon PDF of set A0 [24] is used. In the soft colour interaction (SCI) model [25,26], the diffractive exchange is modelled via non-diffractive DIS scattering with subsequent colour rearrangement between the partons in the final state, which can produce a colour singlet system separated by a large gap in pseudorapidity. A refined version of the SCI model, which uses a generalised area law (GAL) for the probability of having a soft colour interaction [27], is used in this analysis (SCI+GAL). Predictions for diffractive dijet production within the SCI+GAL model are obtained using the leading order generator program LEPTO [28]. Higher order effects are simulated using parton showers [29,30]. The calculations are based on the CTEQ6L [31] proton PDFs. The probability for a soft colour interaction, R, has been tuned to 0.3 to describe the total diffractive dijet cross section as measured using the 'two central jets' topology. In all three models hadronisation is simulated using the Lund string model [32] implemented within the PYTHIA program [33,34]. In this analysis the dijet cross section is also compared to NLO QCD calculations. Assuming proton vertex factorisation, NLO QCD predictions for the diffractive partonic dijet cross section are calculated in bins of x_IP using the NLOJET++ [35] program and integrated over the full x_IP range of the measurement.
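Referring back to the diffractive variables defined at the beginning of this section, the short sketch below evaluates x_IP, β and t for a hypothetical leading-proton four-vector; all numbers are illustrative only and continue the toy event of the previous snippet.

# Sketch of the diffractive variables defined above for a hypothetical
# event.  The virtual-photon four-vector q is carried over from the
# previous snippet; P_prime is an invented leading-proton momentum.

def dot(a, b):
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

P = (920.0, 0.0, 0.0, 920.0)           # incoming proton
q = (1.6, -2.5, 0.0, -1.8)             # virtual photon from previous sketch
P_prime = (883.2, 0.5, -0.3, 883.2)    # outgoing (leading) proton, illustrative

delta = tuple(a - b for a, b in zip(P, P_prime))   # P - P'
Q2 = -dot(q, q)

x_pom = dot(q, delta) / dot(q, P)      # x_IP: momentum fraction of the exchange
beta = Q2 / (2.0 * dot(q, delta))      # beta: fraction carried by struck parton
t = dot(delta, delta)                  # squared four-momentum transfer

print(f"x_IP = {x_pom:.3f}, beta = {beta:.3f}, t = {t:.2f} GeV^2")

For these toy numbers the event falls inside the measured ranges, with x_IP ≈ 0.04 and |t| ≈ 0.34 GeV².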
The renormalisation and factorisation scales are set to μ_r = μ_f = √(Q² + ⟨P*_T⟩²), where ⟨P*_T⟩ is the mean of the transverse momenta of the two leading jets in the hadronic centre-of-mass frame. In order to estimate the uncertainties of the NLO QCD calculations due to missing higher orders, the factorisation scale μ_f and renormalisation scale μ_r are varied simultaneously by factors of 0.5 and 2. The average uncertainty arising from the variation of the scale is about 33%. The DPDFs used in the NLO QCD calculations are H1 2006 Fit B [2] and H1 2007 Jets [3]. The H1 2007 Jets fit is based on the diffractive inclusive and dijet data, while H1 2006 Fit B is based on inclusive diffractive data only. The uncertainty of the NLO QCD calculations due to DPDFs is estimated by propagating the DPDF errors, which are available only for the DPDF set H1 2006 Fit B. The average uncertainty resulting from the DPDF errors is about 7%, which is much smaller than the scale uncertainty. In the NLOJET++ calculations the strong coupling is set via Λ^(4)_MS̄ = 340 ± 37 MeV for four flavours, which corresponds to the value α_s^(5)(M_Z) = 0.119 ± 0.002 for five flavours in the 2-loop approximation [36,37]. The average uncertainty resulting from the variation of α_s(M_Z) is about 1.5%. In order to demonstrate the size of the NLO corrections, the QCD calculations are also performed at leading order. The NLO QCD partonic cross sections are corrected to the level of stable hadrons by evaluating effects due to initial and final state parton showering, fragmentation and hadronisation. The hadronisation corrections are defined in each bin as the ratio of the cross section obtained at the level of stable hadrons to the partonic cross section. Two sets of hadronisation corrections have been obtained with the RAPGAP generator using two different parton shower models: parton showers based on leading-logarithm DGLAP splitting functions at leading order in α_s [10-13] and parton showers based on the colour dipole model as implemented in ARIADNE [38]. The nominal set of corrections (1 + δ_had) is taken as the average of the two sets, while the difference between them is considered as the hadronisation uncertainty. The average hadronisation corrections are about 0.9, with an estimated uncertainty of about 7%. Uncertainties of the NLO QCD predictions arising from scale variations and hadronisation corrections are added in quadrature. In order to compare with the results of the FPS measurements, the NLO QCD predictions as well as the predictions of the RP model are scaled down by a factor of 1.20 [16], since the DPDF sets H1 2006 Fit B and H1 2007 Jets use LRG data which contain a proton dissociation contribution. The t-dependences of the IP and IR fluxes implemented in the H1 DPDF sets and in the RP model are tuned to reproduce the t-dependence measured in inclusive diffractive DIS with a leading proton in the final state [15].

Experimental technique

The e±p data used in this analysis were collected with the H1 detector in the years 2005 to 2007 and correspond to an integrated luminosity of 156.6 pb⁻¹. During this period the HERA collider was operated at electron and proton beam energies of E_e = 27.6 GeV and E_p = 920 GeV respectively, corresponding to an ep centre-of-mass energy of √s = 319 GeV.

H1 detector

A detailed description of the H1 detector can be found elsewhere [39-41]. Here, the components most relevant for the presented measurement are described briefly.
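The scale-variation and quadrature prescription described above can be illustrated with a short toy sketch; nlo_xsec() is a hypothetical stand-in for a real NLOJET++ prediction in one bin, and only the procedure (simultaneous variation by 0.5 and 2, quadrature combination, hadronisation correction and the factor 1.20 for proton dissociation) follows the text.

import math

def nlo_xsec(mu_factor):
    """Hypothetical NLO cross section (pb) versus the common factor
    applied to mu_r = mu_f = sqrt(Q^2 + <P*_T>^2).  Toy scale dependence."""
    return 25.0 * (1.0 - 0.3 * math.log(mu_factor))

nominal = nlo_xsec(1.0)
scale_unc = max(abs(nlo_xsec(2.0) - nominal), abs(nlo_xsec(0.5) - nominal))

rel_scale = scale_unc / nominal              # relative scale uncertainty
rel_had = 0.07                               # ~7% hadronisation uncertainty
rel_total = math.hypot(rel_scale, rel_had)   # added in quadrature

prediction = nominal * 0.9 / 1.20            # average hadronisation correction,
                                             # then scaling for proton dissociation
print(f"prediction = {prediction:.1f} pb +- {100.0 * rel_total:.0f}%")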
A right-handed coordinate system is employed, with the origin at the nominal interaction point, the z-axis pointing in the proton beam (forward) direction and the x (y) axis pointing in the horizontal (vertical) direction. The polar angle θ is measured with respect to the proton beam axis and the pseudorapidity is defined as η = −ln tan(θ/2). The Central Tracking Detector (CTD), with a polar angle coverage of 20° < θ < 160°, is used to reconstruct the interaction vertex and to measure the momenta of charged particles from the curvature of their trajectories in the 1.16 T field provided by a superconducting solenoid. Scattered electrons with polar angles in the range 154° < θ_e < 176° are measured in a lead/scintillating-fibre calorimeter, the SpaCal [41]. The energy resolution is σ(E)/E ≈ 7%/√(E/GeV) ⊕ 1%, as determined from test beam measurements [42,43]. A Backward Proportional Chamber (BPC) in front of the SpaCal is used to measure the electron polar angle. The finely segmented Liquid Argon (LAr) sampling calorimeter surrounds the tracking system and covers the polar angle range 4° < θ < 154°, corresponding to a pseudorapidity range −1.5 < η < 3.4. The LAr calorimeter consists of an electromagnetic section with lead as the absorber and a hadronic section with steel as the absorber. The total depth varies with θ between 4.5 and 8 interaction lengths. The energy resolution, determined from test beam measurements [42,43], is σ(E)/E ≈ 11%/√(E/GeV) ⊕ 1% for electrons and σ(E)/E ≈ 50%/√(E/GeV) ⊕ 2% for hadrons. The hadronic final state is reconstructed using an energy flow algorithm which combines charged particles measured in the CTD with information from the SpaCal and LAr calorimeters [44]. The luminosity is determined by measuring the rate of the Bethe-Heitler process ep → epγ, detected in a photon detector located at z = −103 m. The energy and scattering angle of the leading proton are obtained from track measurements in the FPS [45]. Protons scattered at small angles are deflected by the proton beamline magnets into a system of detectors placed within the proton beam pipe inside two movable stations, known as Roman Pots. Both Roman Pot stations contain four planes, where each plane consists of five layers of scintillating fibres, which together measure two orthogonal coordinates in the (x, y) plane. The fibre coordinate planes are sandwiched between planes of scintillator tiles used for the trigger. The stations approach the beam horizontally and are positioned at z = 61 m and z = 80 m. The detectors are sensitive to scattered protons which lose less than 10% of their energy in the ep interaction and are scattered through angles below 1 mrad. The energy resolution of the FPS is approximately 5 GeV within the measured range, and the absolute energy scale uncertainty is 1 GeV. The effective resolution in the reconstruction of the transverse momentum components of the scattered proton with respect to the incident proton is determined to be ∼50 MeV for P_x and ∼150 MeV for P_y, dominated by the intrinsic transverse momentum spread of the proton beam at the interaction point. The scale uncertainties in the transverse momentum measurements are 10 MeV for P_x and 30 MeV for P_y. Further details of the analysis of the FPS resolution and scale uncertainties can be found elsewhere [16]. For a leading proton which passes through both FPS stations, the track reconstruction efficiency is 48% on average.
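As a quick consistency check of the pseudorapidity convention η = −ln tan(θ/2), the following snippet reproduces the quoted LAr coverage from its polar-angle range.

import math

def eta(theta_deg):
    """Pseudorapidity eta = -ln tan(theta/2) for a polar angle in degrees."""
    return -math.log(math.tan(math.radians(theta_deg) / 2.0))

for theta in (4.0, 20.0, 154.0, 160.0):
    print(f"theta = {theta:6.1f} deg  ->  eta = {eta(theta):+.2f}")

# theta = 4 deg gives eta ~ +3.35 and theta = 154 deg gives eta ~ -1.47,
# consistent with the quoted LAr coverage of -1.5 < eta < 3.4.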
Kinematic reconstruction

The inclusive DIS variables Q², x and the inelasticity y are reconstructed by combining information from the scattered electron and the hadronic final state using the following method [1]:

y = y_e² + y_d (1 − y_d),  Q² = 4 E_e² (1 − y) / tan²(θ_e/2),  x = Q² / (s y).

Here, y_e and y_d denote the values of y obtained from the scattered electron only (electron method) and from the angles of the electron and the hadronic final state (double angle method), respectively [46,47]. The observable x_IP is reconstructed as

x_IP = 1 − E'_p / E_p,

where E'_p is the measured energy of the leading proton in the FPS and E_p is the proton beam energy. The quantity β is reconstructed as β = x/x_IP. The squared four-momentum transfer at the proton vertex is reconstructed using the transverse momentum P_T of the leading proton measured with the FPS and x_IP as described above, such that

t = t_min − P_T² / (1 − x_IP),  |t_min| = x_IP² m_p² / (1 − x_IP),

where |t_min| is the minimum kinematically accessible value of |t|. The absolute resolution in t varies over the measured range from 0.06 GeV² at |t| = 0.1 GeV² to 0.17 GeV² at |t| = 0.7 GeV². An estimator for the momentum fraction z_IP is defined at the level of stable hadrons as

z_IP = (Q² + M_jj²) / (Q² + M_X²),

where M_jj denotes the invariant mass of the dijet system and M_X the invariant mass of the system X. The cross sections are studied in terms of the DIS variables y, Q², β and z_IP, the proton vertex variables x_IP and t, and the jet variables

⟨P*_T⟩ = (P*_T,1 + P*_T,2)/2,  |Δη*| = |η*_1 − η*_2|,  |Δφ*| = |φ*_1 − φ*_2|,

where P*_T,1, η*_1, φ*_1 and P*_T,2, η*_2, φ*_2 are the transverse momenta, pseudorapidities and azimuthal angles of the axes of the leading and next-to-leading jets, respectively, reconstructed in the hadronic centre-of-mass frame. The indices 1, 2 stand for the two jets used in the specific analyses.

Event selection

The events used in the 'two central jets' and 'one central + one forward jet' analyses are triggered on the basis of a coincidence of a signal in the FPS trigger scintillator tiles and in the electromagnetic SpaCal. The trigger efficiency, calculated using events collected with independent triggers, is found to be 99% on average and is independent of the kinematic variables.

DIS selection

The selection of DIS events is based on the identification of the scattered electron as the most energetic electromagnetic cluster in the SpaCal calorimeter. The energy E'_e and polar angle θ_e of the scattered electron are determined from the SpaCal cluster and the interaction vertex reconstructed in the CTD. The electron candidate is required to be in the range 154° < θ_e < 176° and to have E'_e > 10 GeV. In order to improve background rejection, an additional requirement on the transverse cluster radius, estimated using square-root energy weighting [48], of less than 4 cm is imposed. The reconstructed z coordinate of the event vertex is required to be within ±35 cm of the mean position. At least one track originating from the interaction vertex and reconstructed in the CTD is required to have a transverse momentum above 0.1 GeV. The quantity (E − P_z), summed over the energies and longitudinal momenta of all reconstructed particles including the electron, is required to be between 35 GeV and 70 GeV. For neutral current DIS events this quantity is expected to be twice the electron beam energy when neglecting detector effects and QED radiation; this requirement removes radiative DIS events and photoproduction background. In order to ensure a good detector acceptance, the measurement is restricted to the ranges 4 < Q² < 110 GeV² and 0.05 < y < 0.7.
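Before turning to the leading-proton selection, a minimal sketch ties together the reconstruction formulas above; the inputs y_e, y_d, the electron angle and the FPS proton measurement are invented numbers, not values from the data.

import math

E_e = 27.6                      # electron beam energy (GeV)
E_p = 920.0                     # proton beam energy (GeV)
s = 4.0 * E_e * E_p             # ep centre-of-mass energy squared
m_p = 0.938                     # proton mass (GeV)

y_e, y_d = 0.40, 0.42           # illustrative electron-method and double-angle y
theta_e = math.radians(160.0)   # scattered-electron polar angle

y = y_e**2 + y_d * (1.0 - y_d)                                # combined method
Q2 = 4.0 * E_e**2 * (1.0 - y) / math.tan(theta_e / 2.0)**2
x = Q2 / (s * y)

E_p_out = 883.2                 # leading-proton energy in the FPS (GeV)
P_T = 0.58                      # its transverse momentum (GeV)

x_pom = 1.0 - E_p_out / E_p
t_min = -x_pom**2 * m_p**2 / (1.0 - x_pom)
t = t_min - P_T**2 / (1.0 - x_pom)

print(f"y = {y:.3f}, Q2 = {Q2:.1f} GeV^2, x = {x:.2e}")
print(f"x_IP = {x_pom:.3f}, t = {t:.3f} GeV^2")
print("passes DIS cuts:", 4.0 < Q2 < 110.0 and 0.05 < y < 0.7)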
Leading proton selection

A high FPS acceptance is ensured by requiring the energy of the leading proton, E'_p, to be greater than 90% of the proton beam energy E_p, and the horizontal and vertical projections of its transverse momentum to be in the ranges −0.63 < P_x < −0.27 GeV and |P_y| < 0.8 GeV, respectively. Additionally, t is restricted to the range 0.1 < |t| < 0.7 GeV². The quantity (E + P_z), summed over all reconstructed particles including the leading proton, is required to be below 1880 GeV. For neutral current DIS events this quantity is expected to be twice the proton beam energy. This requirement is applied to suppress cases where a DIS event reconstructed in the central detector coincides with background in the FPS, for example due to interactions of off-momentum protons from the beam halo with residual gas within the beampipe. Previous diffractive dijet DIS measurements [3,4,6] and DPDF fits [2,3,8] have been performed for |t_min| < |t| < 1 GeV². To compare with these results, the cross sections are extrapolated to the range |t_min| < |t| < 1 GeV² using the t dependence measured in inclusive diffractive DIS with a leading proton in the final state [15].

Jet selection

Reconstructed hadronic final state objects are used as input to the longitudinally invariant k_T jet algorithm [49], using the p_T recombination scheme with a jet radius of 1.0, as implemented in the FastJet package [50]. The jet finding algorithm is applied in the photon-proton centre-of-mass system (γ*p frame). The jet variables in the γ*p frame are denoted by an asterisk. In the 'two central jets' analysis, the requirements are P*_T,1 > 5 GeV and P*_T,2 > 4 GeV for the leading and next-to-leading jet, respectively. Asymmetric cuts are placed on the jet transverse momenta to restrict the phase space to a region where NLO calculations are reliable. The axes of the jets are required to lie within the pseudorapidity range −1 < η_1,2 < 2.5 in the laboratory frame. The selected event topology is similar to that of the LRG dijet data used in the DPDF fits [3,8]. This data selection is used for testing the proton vertex factorisation hypothesis and the DPDFs in processes with a leading proton in the final state. The selection of the 'one central + one forward jet' topology is motivated by the study of diffractive DIS processes in a phase space where deviations from DGLAP parton evolution may be present. The requirement of a forward jet suppresses the parton p_T ordering which is assumed by DGLAP evolution. At least one central jet with −1 < η_c < 2.5 and one forward jet with 1 < η_f < 2.8, where η_f > η_c, are required, each with P*_T > 3.5 GeV. In addition, the invariant mass of the central-forward jet system is required to be larger than 12 GeV to avoid the phase space region in which NLO QCD calculations are unreliable. The selection criteria for the two analyses are summarised in Table 1 and illustrated in the sketch below. The 'two central jets' data sample contains 581 events and the 'one central + one forward jet' data sample contains 309 events.
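A minimal sketch of the two topology selections, assuming jets are given as (P*_T, η) pairs sorted by decreasing P*_T; the invariant-mass argument m_jj is a hypothetical stand-in, since the real cut is computed from the full jet four-momenta.

def two_central_jets(jets):
    """Selection for the 'two central jets' topology (Table 1)."""
    if len(jets) < 2:
        return False
    (pt1, eta1), (pt2, eta2) = jets[0], jets[1]
    return (pt1 > 5.0 and pt2 > 4.0 and
            all(-1.0 < eta < 2.5 for eta in (eta1, eta2)))

def central_plus_forward(jets, m_jj):
    """Selection for the 'one central + one forward jet' topology."""
    central = [j for j in jets if j[0] > 3.5 and -1.0 < j[1] < 2.5]
    forward = [j for j in jets if j[0] > 3.5 and 1.0 < j[1] < 2.8]
    # require a forward jet more forward than a central one, M_jj > 12 GeV
    ok = any(f[1] > c[1] for f in forward for c in central)
    return ok and m_jj > 12.0

jets = [(6.2, 0.4), (4.5, 2.6)]            # hypothetical event
print(two_central_jets(jets))               # False: second jet too forward
print(central_plus_forward(jets, 14.0))     # True: central + forward topology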
Background subtraction

The selected data samples contain background events arising from random coincidences of non-diffractive DIS events with off-momentum beam-halo protons producing a signal in the FPS. The beam-halo background contribution is estimated statistically by combining the quantity (E + P_z) summed over all reconstructed particles in the central detector in DIS events (without the requirement of a track in the FPS) with the quantity (E + P_z) for beam-halo protons from randomly triggered events. The (E + P_z) spectra for leading proton and beam-halo DIS events for both dijet event topologies are shown in Fig. 2. The background distribution is normalised to the FPS DIS data distribution in the range (E + P_z) > 1880 GeV, where the beam-halo background dominates. The ratio of signal to background depends on the signal cross section and is found to be considerably larger than in the inclusive diffractive DIS processes measured with the FPS detector [16]. After the selection cut (E + P_z) < 1880 GeV the remaining background amounts on average to about 5%. The background is determined and subtracted bin-by-bin using this method.
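The bin-by-bin halo subtraction described above can be sketched as follows; the histograms are toy numbers chosen so that the residual background comes out at the few-percent level quoted in the text.

import numpy as np

# Toy sketch of the statistical beam-halo subtraction: the (E + P_z)
# spectrum of randomly triggered halo events is normalised to the FPS
# DIS data in the control region (E + P_z) > 1880 GeV and subtracted
# bin by bin.  All bin contents are hypothetical.

edges = np.array([1700., 1750., 1800., 1840., 1880., 1920., 1960.])
data = np.array([40., 150., 420., 260., 30., 12.])   # FPS DIS candidates
halo = np.array([5., 8., 14., 20., 45., 18.])        # random-trigger halo

control = edges[:-1] >= 1880.0              # normalisation region
scale = data[control].sum() / halo[control].sum()

signal = data - scale * halo                # bin-by-bin subtraction
signal_region = edges[1:] <= 1880.0         # analysis cut (E + P_z) < 1880 GeV
print("residual background fraction:",
      (scale * halo[signal_region]).sum() / data[signal_region].sum())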
Detector simulation

Monte Carlo simulations are used to correct the data for the effects of detector acceptance, inefficiencies, migrations between measurement intervals due to finite resolution and QED radiation. The response of the H1 detector is simulated in detail using the GEANT3 program [51] and the events are passed through the same analysis chain as is used for the data. The reaction ep → eXp is simulated with the RAPGAP program [19] using the RP model and the DPDF set H1 2006 Fit B as described in Sect. 3. QED radiative effects are simulated using the HERACLES [52] program within the RAPGAP event generator. In the 'two central jets' analysis the η*_2 distribution of the Monte Carlo simulation is reweighted in order to describe the experimental data. A similar procedure is applied to the η*_f distribution in the 'one central + one forward jet' sample. More details of the analysis can be found elsewhere [53]. A comparison of the FPS data and the RAPGAP simulation is presented in Fig. 3 for the variables x_IP and |t| reconstructed with the FPS detector. The contributions of light quarks (uds) to IP and IR exchanges and of charm quarks to IP exchange are also shown in the log₁₀(x_IP) distribution. Figure 4 presents the data and the Monte Carlo distributions of the variables P*_T,1, |Δη*| and z_IP for the 'two central jets' sample and of the variables ⟨P*_T⟩, η_f and z_IP for the 'one central + one forward jet' topology. For this comparison z_IP is reconstructed from the scattered electron and the hadronic final state in the H1 detector. The MC simulation reproduces the data within the experimental systematic uncertainties. The average detector resolutions on the reconstructed jet variables η, P*_T and z_IP are 7%, 13% and 32%, respectively.

Fig. 2 The distribution of (E + P_z) for FPS DIS events (points with error bars) and for beam-halo DIS events (histogram).

Fig. 3 The distributions of the variables x_IP (a) and |t| (b) reconstructed using the FPS (points with error bars) for events with two central jets. The beam-halo background is subtracted from the data. The RAPGAP Monte Carlo simulation, reweighted to describe the η*_2 distribution, is shown as a histogram. Contributions from sub-processes are illustrated in the x_IP distribution as areas filled with different colours.

Fig. 4 The distributions of the variables P*_T,1, |Δη*| and z_IP for events with two central jets and of the variables ⟨P*_T⟩, η_f and z_IP for events with one central and one forward jet (points with error bars). The beam-halo background is subtracted from the data. The RAPGAP Monte Carlo simulation is shown as a histogram.

Cross section determination

In order to account for migration and smearing effects and to evaluate the dijet cross sections at the level of stable hadrons, matrix unfolding of the reconstructed data is performed [54]. The resolution and acceptance of the H1 detector are reflected in the unfolding matrix A, which relates the reconstructed variables y_rec to the variables on the level of stable hadrons x_true via the formula A x_true = y_rec. The matrix A, obtained for each measured distribution using the RAPGAP simulation, is constructed within an enlarged phase space in order to take into account possible migrations from outside of the measured kinematic range. The following sources of migrations to the analysis phase space are considered: migrations from low Q², from low y, from large x_IP, from low-P_T jets, from the single-jet topology fulfilling the P_T requirements for the leading jet as given in Table 1 and, in the case of the 'one central + one forward jet' analysis, from large η_f. In order to treat the contamination of the measurement by these migrations correctly, the analysis is performed in an extended phase space which includes side-bins in y_rec and x_true for each of the migration sources listed above. The unfolded true distribution on the level of stable hadrons is obtained from the measured one by minimising a χ² function defined as

χ² = χ²_A + τ χ²_L,  with  χ²_A = (y_rec − A x_true)^T V^(−1) (y_rec − A x_true),

where χ²_A is a measure of the deviation of A x_true from the data bins y_rec and V is the covariance matrix of the data, based on the statistical uncertainties. In order to suppress statistical fluctuations, the regularisation term χ²_L is included in the χ² function and defined as χ²_L = (x_true)². The regularisation parameter τ is tuned in order to minimise the bin-to-bin correlations of the covariance matrix V. Further details of the unfolding method can be found in [55,56]. The Born level cross section is calculated in each bin i according to the formula

σ_i = x_i / (L (1 + δ_rad)),

where x_i is the number of background-subtracted events as obtained with the unfolding procedure described above, L is the total integrated luminosity and (1 + δ_rad) are the QED radiative corrections, which amount to about 5% on average. The differential cross sections are obtained by dividing by the bin width.
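A minimal sketch of the regularised unfolding and the Born-level cross-section formula follows; the 3-bin response matrix, data and bin widths are toy numbers standing in for the RAPGAP-derived matrix A and the real measurement.

import numpy as np

# Toy sketch: minimise chi^2 = (y - A x)^T V^-1 (y - A x) + tau * x.x
# for the hadron-level spectrum x, then convert to a cross section.

A = np.array([[0.80, 0.15, 0.02],
              [0.18, 0.70, 0.20],
              [0.02, 0.10, 0.75]])         # response (migration) matrix
x_true = np.array([120.0, 90.0, 40.0])     # toy hadron-level spectrum
rng = np.random.default_rng(1)
y = rng.poisson(A @ x_true).astype(float)  # smeared, fluctuated "data"
V = np.diag(np.maximum(y, 1.0))            # statistical covariance matrix

tau = 1e-4                                 # regularisation strength
Vinv = np.linalg.inv(V)
# Normal equations of the minimum: (A^T V^-1 A + tau*I) x = A^T V^-1 y
x_unf = np.linalg.solve(A.T @ Vinv @ A + tau * np.eye(3), A.T @ Vinv @ y)

# Born-level cross section per bin, sigma_i = x_i / (L * (1 + delta_rad)),
# divided by a hypothetical bin width for the differential result.
L_int, delta_rad = 156.6, 0.05             # pb^-1 and ~5% radiative correction
widths = np.array([2.0, 2.0, 4.0])
dsigma = x_unf / (L_int * (1.0 + delta_rad) * widths)

print("unfolded:", np.round(x_unf, 1), " true:", x_true)
print("dsigma per bin (pb/unit):", np.round(dsigma, 3))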
Systematic uncertainties on the measured cross sections

The systematic uncertainties are implemented into the response matrix A and propagated through the unfolding procedure. The following sources are considered:

• The systematic uncertainty arising from the hadronic final state reconstruction is determined by varying the energy scale of the hadronic final state by ±2%, as obtained using a dedicated calibration [57]. The 2% uncertainty of the calibration is confirmed by studies in the region of low jet transverse momenta and low photon virtuality. This source leads to an average uncertainty of the cross section measurements of 6.2% for the production of two central jets and 9.5% for the production of one central and one forward jet.

• The model dependence of the acceptance and migration corrections is estimated by varying the shapes of the distributions in the kinematic variables P*_T, η*_2, η*_f, x_IP, β and Q² in the RAPGAP simulation within the constraints imposed on those distributions by the presented data. The η*_2 and η*_f reweightings are varied within the errors of the parameters of the reweighting function, which amount to up to a factor of 4. The P*_T distribution is reweighted by (P*_T)^(±0.15), the x_IP distribution by (1/x_IP)^(±0.05), the β distribution by β^(±0.05) and (1 − β)^(∓0.05), and the Q² distribution by (log Q²)^(±0.2). For the 'two central jets' selection the largest uncertainty is introduced by the η*_2 reweighting (4%), followed by β (2.7%), while the reweightings in x_IP, P*_T and Q² result in an overall uncertainty of 2.3%. The uncertainties for the 'one central + one forward jet' topology are 12.8% for the η*_f reweighting, followed by P*_T (2.1%), while the reweightings in x_IP, β and Q² result in an overall uncertainty of 1.8%.

• Reweighting the t distribution by e^(±t) results in a normalisation uncertainty of 4.2% for the extrapolation in t from the measured range of 0.1 < |t| < 0.7 GeV² to the region |t_min| < |t| < 1 GeV² covered by the LRG data [3]. The uncertainty arising from the t reweighting within the FPS acceptance range of 0.1 < |t| < 0.7 GeV² is on average 1.4%.

The following uncertainties are considered to influence the normalisation of all measured cross sections in a correlated way:

• Two sources of systematics related to the background subtraction are taken into account: the energy scale uncertainty and the limited statistics of the data sample without the (E + P_z) cut. Firstly, the beam-halo spectrum is shifted within the quoted uncertainties of the hadronic energy scale and the proton energy scale. Secondly, the normalisation of the background spectrum is shifted by 1 ± 1/√N_bkg, where N_bkg is the number of events in the FPS data sample in the range (E + P_z) > 1880 GeV. The uncertainties from these two sources are combined in quadrature. The uncertainty of the proton beam-halo background is considered as a normalisation error and is found to be 3.5% for the production of two central jets and 1.5% for the production of one central and one forward jet.

• A normalisation uncertainty of 1% is attributed to the trigger efficiencies, evaluated using event samples obtained with independent triggers.

• The uncertainty in the FPS track reconstruction efficiency results in a normalisation uncertainty of 2%.

• A normalisation uncertainty of 3.7% arises from the luminosity measurement.

The systematic errors shown in the figures are obtained by adding in quadrature all the contributions except for the normalisation uncertainties, leading to an average uncertainty of 11% for 'two central jets' and 17% for 'one central + one forward jet'. The overall normalisation uncertainty of the cross section measurement, obtained by adding in quadrature all normalisation uncertainties, is 7% for 'two central jets' and 6.2% for 'one central + one forward jet'. The cross section measurement in t has a normalisation uncertainty of 4.6%.

Fig. 7 The differential cross section for the production of two central jets shown as a function of Q², y, log₁₀(x_IP) and z_IP. The inner error bars represent the statistical errors. The outer error bars indicate the statistical and systematic errors added in quadrature. The RP, SCI+GAL and TGP models are shown as solid, dotted and dashed-dotted lines, respectively. R denotes the ratio of the measured cross sections and MC model predictions to the nominal values of the measured cross sections. The total normalisation error of 7.0% is not shown.
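As a numerical cross-check, adding the quoted normalisation sources for the 'two central jets' sample in quadrature reproduces the overall 7% figure; the values below are taken directly from the text.

import math

sources = {
    "beam-halo background": 3.5,
    "trigger efficiency": 1.0,
    "FPS track efficiency": 2.0,
    "luminosity": 3.7,
    "t extrapolation": 4.2,
}
total = math.sqrt(sum(v**2 for v in sources.values()))
print(f"total normalisation uncertainty ~ {total:.1f}%")   # prints ~7.0%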
The ratios of the measured cross sections to the MC predictions show that the RP model gives a good description of the shape, but underestimates the dijet cross section by a factor of 1.5. For this comparison the reweighting with respect to the η*_2 distribution specified in Sect. 5.2 is not applied to the RP model. Since the IP and IR fluxes which determine the x_IP dependence in the RP model have been tuned to the inclusive diffractive DIS LRG data [2], the good agreement in shape of the RP model with the dijet data supports the hypothesis of proton vertex factorisation. Both the SCI+GAL and TGP models fail to describe the data. The SCI+GAL model predicts harder spectra in Q² and z_IP and a softer spectrum in log₁₀(x_IP) than are seen in the data. It should be noted that the probability of soft colour interactions, and hence the normalisation of diffractive processes in the SCI+GAL model, is adjusted to the measured dijet cross section. The TGP model is in agreement with the data only at low x_IP but underestimates the data significantly at larger x_IP, where sub-leading contributions are expected to be large. Figure 8 shows the differential cross sections in P*_T,1 and |Δη*| for the data and the MC models. The shapes of these distributions are again well described by the RP model. Although the SCI+GAL model is not able to describe the differential cross sections as a function of the diffractive kinematic variables x_IP and z_IP and of the DIS kinematic variable Q², this model reproduces reasonably well the measurements as a function of the jet variables P*_T,1 and |Δη*|.

Fig. 8 The differential cross section for the production of two central jets shown as a function of P*_T,1 and |Δη*|. For more details see Fig. 7.

Fig. 9 The differential cross section for the production of two central jets shown as a function of t (a), and the corresponding t-slope (circles) shown as a function of x_IP (b). The result is compared to the H1 inclusive diffractive DIS data (triangles) [16]. The error bars indicate the statistical and systematic errors added in quadrature.

None of the LO Monte Carlo models is able to describe all features of the measured differential cross sections. The best shape description in all cases is provided by the RP model. However, this model is a factor of 1.5 below the data in normalisation. The TGP and SCI+GAL models fail to describe the shape of the differential cross sections. The differential cross section in |t| shown in Fig. 9a is fitted using an exponential form exp(Bt), motivated by Regge phenomenology. An iterative procedure is used to determine the slope parameter B, where bin centre corrections are applied to the differential cross section in t using the value of B extracted from the previous fit iteration. The final fit results in B = 5.89 ± 0.50 (exp.) GeV⁻², where the experimental uncertainty is defined as the quadratic sum of the statistical and systematic uncertainties and the full covariance matrix is taken into account in the fit. As shown in Fig. 9b, this t-slope parameter is consistent within the errors with the t-slope measured in inclusive diffractive DIS with a leading proton in the final state [16] at the same value of x_IP. The consistency of the measured t dependence with that of the inclusive diffractive DIS cross sections supports the validity of the proton vertex factorisation hypothesis. The cross section for the production of two central jets can be compared with the diffractive dijet measurement obtained using the LRG technique [3].
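The exponential t-slope fit can be sketched as a linear fit of ln(dσ/dt) versus t; the points below are generated from a toy exponential with a slope near the published value, and the iterative bin-centre correction and full covariance matrix of the real analysis are omitted.

import numpy as np

t = np.array([-0.15, -0.25, -0.35, -0.45, -0.55, -0.65])  # bin centres (GeV^2)
rng = np.random.default_rng(2)
dsdt = 120.0 * np.exp(5.9 * t) * (1.0 + 0.03 * rng.normal(size=t.size))

B, lnN = np.polyfit(t, np.log(dsdt), 1)   # slope and intercept of ln(dsigma/dt)
print(f"fitted t-slope B = {B:.2f} GeV^-2 (toy input: 5.9)")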
The LRG measurement includes proton dissociation to states Y with masses M_Y < 1.6 GeV. To correct for the contribution of proton dissociation processes, the LRG dijet data are scaled down by a factor of 1.20, taken from the diffractive inclusive DIS measurement [16]. To compare with the results of the LRG method, dijet events are selected in the same kinematic range: the DIS and jet variables Q², y, P*_T,1 and η_1,2 are restricted to the ranges 4 < Q² < 80 GeV², 0.1 < y < 0.7, P*_T,1 > 5.5 GeV and −1 < η_1,2 < 2, respectively. The results are presented in Fig. 10. The comparison shows consistency of the results within the experimental errors. Compared to the LRG measurement, the phase space of the present analysis extends to x_IP values that are a factor of three larger.

Fig. 10 The differential cross section for the production of two central jets in the phase space of the LRG measurement [3], as described in the text in Sect. 7.1. The cross section is shown as a function of log₁₀(x_IP). The inner error bars represent the statistical errors. The outer error bars indicate the statistical and systematic errors added in quadrature. The published LRG dijet data, scaled down by a factor of 1.20 to correct for the proton dissociation contribution, are shown as open circles with error bars indicating the statistical and systematic errors added in quadrature.

7.2 Differential cross section for the production of one central + one forward jet

Figure 11 shows the differential cross sections for the production of 'one central + one forward jet' as a function of |Δη*|, η_f and the mean transverse momentum of the forward and central jets ⟨P*_T⟩, together with the expectations from NLO QCD. Within the errors, the measured data are described by the NLO QCD predictions. The NLO QCD predictions are shown with the hadronisation uncertainties and the scale uncertainties, which dominate over the DPDF uncertainties. In order to test the predictions in a wider kinematic range, the η_f distribution of the forward jet shown in Fig. 11 is extended down to a minimum value of −0.6, where the prediction overshoots the data. LO QCD calculations, performed using the DPDF set H1 2007 Jets, underestimate the measured cross section by a factor of about 2.5. The differential cross sections measured as a function of z_IP, log₁₀(β) and |Δφ*| are presented in Fig. 12. The data are well described by the NLO QCD predictions. In the BFKL approach [58-60], additional gluons can be emitted in the gap between the two jets, leading to a de-correlation in azimuthal angle |Δφ*|. The observed agreement between the measured cross sections and the NLO DGLAP predictions in this distribution shows no evidence for such an effect in the kinematic region accessible in this analysis.

Fig. 11 The differential cross section for the production of one central and one forward jet shown as a function of |Δη*|, η_f and ⟨P*_T⟩, compared with NLO QCD predictions. The total normalisation error of 6.2% is not shown.

Fig. 12 The differential cross section for the production of one central and one forward jet shown as a function of z_IP, log₁₀(β) and |Δφ*|. For more details see Fig. 11.

Figure 13 presents the differential cross sections for the production of 'one central + one forward jet' as a function of the variables ⟨P*_T⟩, |Δη*| and η_f. As in the case of 'two central jets', the RP model lies below the data, here by a factor of 2.2, which is a larger discrepancy in normalisation than that observed in the 'two central jets' sample. A similar trend is seen for the LO QCD contributions in the two samples. The normalisation of the SCI+GAL model, tuned to the 'two central jets' sample, agrees with the cross section for 'one central + one forward jet'.
The shapes of the distributions are reasonably well described by both the RP and SCI+GAL models. The differential cross sections in z_IP, log₁₀(β) and |Δφ*| are shown in Fig. 14. The shapes of all distributions are well described only by the RP model. As in the case of the 'two central jets' sample, the SCI+GAL model is not able to describe the distributions of the diffractive kinematic variables, but it reproduces well the shape of the |Δφ*| distribution. The TGP model again completely fails to describe the z_IP spectrum.

Fig. 13 The differential cross section for the production of one central and one forward jet shown as a function of the mean transverse momentum of the two jets ⟨P*_T⟩, |Δη*| and η_f. The inner error bars represent the statistical errors. The outer error bars indicate the statistical and systematic errors added in quadrature. The RP and SCI+GAL models are shown as solid and dotted lines, respectively. R denotes the ratio of the measured cross sections and MC model predictions to the nominal values of the measured cross sections. The total normalisation error of 6.2% is not shown.

Fig. 14 The differential cross section for the production of one central and one forward jet shown as a function of z_IP, log₁₀(β) and |Δφ*|. The RP, SCI+GAL and TGP models are shown as solid, dotted and dashed-dotted lines. For more details see Fig. 13.

Summary

Integrated and differential cross sections are measured for dijet production in the diffractive DIS process ep → e jj X p. In the process studied, the scattered proton carries at least 90% of the incoming proton momentum and is measured in the H1 Forward Proton Spectrometer. The presented results are compatible with the previous measurements based on the LRG method and explore a new domain at large x_IP. Dijet cross sections are measured for an event topology with two jets produced in the central pseudorapidity region, where the DGLAP parton evolution mechanism is expected to dominate, and for a topology with one jet in the central region and one jet in the forward region, where effects of non-DGLAP parton evolution may be observed. NLO QCD predictions based on the DGLAP approach, using DPDFs extracted from inclusive diffraction measurements, describe the dijet cross sections within the errors for both event topologies, supporting the universality of DPDFs. The measured t-slope of the dijet cross section is consistent within uncertainties with the value measured in inclusive diffractive DIS with a leading proton in the final state. This confirms the validity of the proton vertex factorisation hypothesis for dijet production in diffractive DIS. The measured cross sections are compared with predictions from Monte Carlo models based on leading order matrix elements and parton showers. The Resolved Pomeron model describes the shape of the cross sections well, but is too low in normalisation, which suggests that contributions from higher order processes are sizable in this approach. The SCI+GAL model is able to reproduce the normalisation of the cross sections for both dijet topologies after tuning the model to the 'two central jets' data. The dependence of the diffractive dijet cross section on x_IP and z_IP is able to distinguish between the models: the SCI+GAL and Two-Gluon Pomeron models fail to describe the shape of the distributions of the diffractive variables, while the Resolved Pomeron model describes the shape of these distributions well.
Acknowledgements We thank the engineers and technicians for their work in constructing and maintaining the H1 detector, our funding agencies for financial support, the DESY technical staff for continual assistance and the DESY directorate for support and for the hospitality which they extend to the non-DESY members of the collaboration.

Open Access This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Extracellular vesicles, from the pathogenesis to the therapy of neurodegenerative diseases

Extracellular vesicles (EVs) are small bilipid-layer-enclosed vesicles that can be secreted by all tested types of brain cells. As key intercellular communicators, EVs have emerged as important contributors to the pathogenesis of various neurodegenerative diseases (NDs), including Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and Huntington's disease, through the delivery of bioactive cargos within the central nervous system (CNS). Importantly, CNS cell-derived EVs can be purified via immunoprecipitation, and EV cargos with altered levels have been identified as potential biomarkers for the diagnosis and prognosis of NDs. Given the essential impact of EVs on the pathogenesis of NDs, pathological EVs have been considered as therapeutic targets, and EVs with therapeutic effects have been utilized as potential therapeutic agents or drug delivery platforms for the treatment of NDs. In this review, we focus on recent research progress on the pathological roles of EVs released from CNS cells in the pathogenesis of NDs, summarize findings that identify CNS-derived EV cargos as potential biomarkers to diagnose NDs, and comprehensively discuss the promising potential of EVs as therapeutic targets, agents, and drug delivery systems in treating NDs, together with current concerns and challenges for basic research and clinical applications of EVs regarding NDs.

Introduction

Neurodegenerative diseases (NDs) are a group of disorders characterized by progressive neuronal loss associated with the deposition of pathological proteins/peptides in the central and peripheral nervous systems [1]. Examples of NDs include Alzheimer's disease (AD), Parkinson's disease (PD), Huntington's disease (HD), amyotrophic lateral sclerosis (ALS), and many others. The pathogeneses of NDs are complicated and far from being fully understood. However, the delivery of pathogenic molecules among diverse cellular populations and the establishment of a disease-associated microenvironment have emerged as key contributors [2,3]. Although cells can interact with surrounding and distant cells through various pathways, extracellular vesicles (EVs) are one of the most powerful tools for intercellular communication [2,4,5]. The discovery of EVs is one of the most groundbreaking discoveries in cell biology over the past few decades [2,4,6,7]. EVs are nanoscale bilipid-layer-enclosed vesicles that are released from most eukaryotic cells and can be found in tissues and biological fluids [8-11]. They are a heterogeneous group of cell-derived membranous structures that mainly comprise exosomes and ectosomes/microvesicles (MVs) [2,4,8-12]. Other types of EVs include mitovesicles [12], apoptotic bodies, and retrovirus-like vesicles [13], which are not included in this review. In the central nervous system (CNS), EVs are widely involved in brain development and regeneration by regulating cell fate commitment and neural plasticity [14,15], in the pathogenesis of NDs via the transfer of disease-associated molecules (e.g., amyloid precursor protein [APP], Tau, cytokines) [10,16], and in neuroregeneration/repair in NDs and acute brain damage [17]. Therefore, EVs have been considered as important pathological factors, potential therapeutic targets, and promising therapeutic agents of NDs. In this review, we summarize the pathological roles of EVs in NDs, and comprehensively discuss current progress in basic research and clinical investigations that utilize EVs as potential therapeutics and drug delivery systems for treating NDs.
Biogenesis of EVs

The mechanism of biogenesis is a major difference between exosomes and MVs. Exosomes are among the smallest EVs and are also known as endosome-derived vesicles. The biogenesis of exosomes starts with the inward budding of the plasma membrane (Fig. 1). The fusion of primary endocytic vesicles forms the early endosome (EE) in clathrin- or caveolin-dependent or -independent pathways [18]. EEs mature into late endosomes, also known as multivesicular bodies (MVBs). The formation and maturation of MVBs is regulated by various pathways involving Rab5/Rab7 [19] and the small integral membrane protein of the lysosome/late endosome (SIMPLE) [20]. For instance, Rab5 forms a complex with Rabaptin-5 and Rabex-5, causing rapid recruitment of Rab5 effectors including the VPS34/p150 complex [19]. Rab5 is then removed from the MVB membrane by the Mon1/SAND-1/Ccz1 complex, which promotes the recruitment and activation of Rab7 and of the HOPS complex (e.g., Vps11, Vps16, and Vps18) for membrane tethering and fusion [21]. In this process, intraluminal vesicles (ILVs) are formed via inward membrane budding in Rab5- and endosomal sorting complex required for transport (ESCRT)-dependent and -independent manners [19,22-25]. In the ESCRT-dependent machinery, ESCRT-0, ESCRT-I, ESCRT-II, and ESCRT-III constitute the crossroad for recognition of proteins and membrane budding [22-25]. The ESCRT-independent mechanism has been reported in the formation of melanosomes, where Pmel17 [26] and the tetraspanin CD63 [27] contribute to ILV formation.

Fig. 1 The biogenesis and uptake of EVs. The exosome biogenesis pathway starts with the formation of the early endosome by endocytosis at the plasma membrane. ILVs are generated by inward budding of the multivesicular body (MVB, also known as late endosome) lipid bilayer membrane. The MVB can fuse with the plasma membrane under the regulation of multistep processes, including MVB trafficking along microtubules and docking at the plasma membrane for exosome secretion. Alternatively, MVBs fuse with lysosomes for degradation. Unlike exosomes, microvesicles are released directly by budding from the plasma membrane. EVs in the extracellular space bind to the surface of recipient cells through protein-protein or receptor-ligand interactions, leading to the internalization of EVs by recipient cells through fusion or endocytosis. EV contents are then released into the recipient cells to manipulate various biological processes.
After that, one part of the MVBs (degradative MVBs) is degraded by lysosomes, while the remaining MVBs (secretory MVBs) are guided to the plasma membrane [28,29]. Triggered by Ca²⁺ influx, MVBs fuse with the plasma membrane mainly through the exocytic pathway, which requires a fusion machinery (SNARE [soluble N-ethylmaleimide-sensitive factor attachment protein receptor] proteins and tethering factors), molecular switches (small Rab GTPases), the cytoskeleton and its motor proteins (actin, microtubules, kinesins, myosins), and other supporting factors [4,30]. Moreover, recent evidence suggests that only certain subpopulations of MVBs fuse with the plasma membrane, due to the selective binding of supporting factors for plasma membrane fusion [4,31,32]. After MVB fusion with the plasma membrane, ILVs are released into the extracellular space, where they are defined as exosomes. Unlike exosomes, MVs are generated through direct outward budding and fission of the plasma membrane. MVs tend to be larger than exosomes, despite an overlap of the size range. The release of MVs relies on the dynamic interplay between phospholipid redistribution and cytoskeletal protein contraction [13]. In this process, ADP-ribosylation factor 6 activates phospholipase D, leading to the recruitment of extracellular signal-regulated kinase (ERK) to the plasma membrane. ERK phosphorylates and activates myosin light-chain kinase, which triggers the release of MVs. Notably, although exosomes and MVs utilize distinct pathways for biogenesis, there are many technical challenges in isolating EVs, and specific types of EVs in particular, due to the small size of EVs and the enrichment of contaminants with similar size and/or density as EVs in biological fluids [33,34]. Several techniques have been developed for isolating EVs (e.g., ultracentrifugation, density-gradient centrifugation, filtration, size-exclusion chromatography, modified polymer co-precipitation, and commercially available kits) and for separating exosomes (e.g., immunoaffinity chromatography) [35]. These techniques have their disadvantages, such as low purity, high cost, insufficient homogeneity, and high labor intensity [34,36,37]. Besides, the characterization of EVs also faces many technical issues. Methodologies that characterize the sizes of EVs, like dynamic light scattering and nanoparticle tracking analysis, cannot distinguish nanoscale contaminants from EVs, and bulk methods like western blotting require a great number of EVs [34]. Inspiringly, single-EV-based high-throughput analysis has been developed recently, which opens a window to fully understand the heterogeneity and complex functions of EVs [38,39]. Similar to exosomes, MVs also contain various proteins, lipids, and nucleic acids. Interestingly, MVs have been found to contain certain proteins different from those of exosomes. For example, MVs express CD40, selectins, integrins, and, highly likely, cytoskeletal proteins, owing to their plasma membrane origin [51]. Recent studies reported that small MVs also express CD63, like exosomes, but the expression levels of CD81 and CD9 are significantly lower in MVs than in exosomes [52]. Thus, these surface proteins have been widely utilized to distinguish between exosomes and MVs. Besides, the membranes of MVs are highly enriched in cholesterol, phosphatidylserine, and diacylglycerol [51].

Content sorting mechanisms of EVs

Molecules in the cytosol are sequestrated into ILVs through various machineries. Proteins can be passively loaded and actively transferred into exosomes.
The ESCRT-related molecules syndecans and syntenin bind to CD63 and Alix through the LYPX(n)L motif, thereby enhancing the EV accumulation of syntenin, clathrin, Alix, CD63, heat shock proteins, and ubiquitinated proteins [53-56]. Protein sorting into exosomes is also mediated by tetraspanins [57], Nedd4 family-interacting protein 1 (Ndfip1) [58], SIMPLE [20], and lipid (raft)-related mechanisms in an ESCRT-independent manner [59]. For example, the exosomal surface proteins CD9 and CD63 bind to CD10, premelanosome protein (PMEL), Epstein-Barr virus (EBV) proteins, and the latent membrane protein 1 (LMP1) to facilitate the loading of the latter into ILVs/exosomes [27,57,60]. Similar to the situation of protein loading, exosomes are loaded with nucleic acids, especially miRNAs, through passive packaging and selective sorting mechanisms [61,62]. A growing number of studies have identified multiple soluble/membrane-bound RNA-binding proteins that function as 'chaperones' to transfer miRNAs into exosomes [48,62,63]. hnRNPA2B1 specifically binds to certain miRNAs with GGAG and CCCU motifs and selectively loads these miRNAs into exosomes [62]. Similarly, SYNCRIP enhances the sorting of miRNAs into exosomes by the same motif-binding mechanism as hnRNPA2B1 [63]. Ago2, a key component of the RNA-induced silencing complex (RISC), loads miRNAs into exosomes with high miRNA-binding affinity via interaction with endosomal CD63 [48]. Together, exosomes contain various bioactive cargos that are passively packaged or selectively loaded. Through the delivery of these cargos, exosomes exert their biological functions under physiological and pathological conditions. To date, little is known about how cargos in MVs are loaded. It is highly likely that both passive packaging and selective sorting (e.g., endogenous RNA-modulated miRNA sorting [61] and CD63-mediated protein sorting [60]) act in concert to load cargos into MVs. More studies are needed to advance our knowledge of the content-sorting mechanisms of MVs.

Uptake of EVs

Generally, EVs deliver their cargos to target cells mainly through two routes: endocytosis and fusion with the target cell membrane. Endocytosis is the dynamic internalization of cargos by cells for signaling transduction and nutrient uptake. There are at least five pathways of endocytosis, namely caveolae-dependent endocytosis, clathrin-dependent endocytosis, clathrin- and caveolin-independent endocytosis, macropinocytosis, and phagocytosis [64]. Accumulating evidence has demonstrated the relevance of these pathways to EV uptake [65,66]. Rat pheochromocytoma PC12 cells have been found to utilize both clathrin-dependent endocytosis and macropinocytosis for EV uptake [66]. In another type of tumor cell, colon carcinoma COLO205 cells, both caveolae- and clathrin-dependent endocytosis are exploited for EV uptake [67]. Moreover, EVs can be internalized into macrophages by phagocytosis, and phagocytosis is a much more efficient route for EV uptake than endocytosis [68]. Importantly, many proteins that are involved in the recognition and uptake of viruses, liposomes, and nanoparticles are expressed on EVs [69]. Subsequent studies further demonstrated that endocytosis of EVs is facilitated by receptor-ligand complexes consisting of CD9, CD33, CD62, CD81, CD106, and many other molecules [65].
It is worth noting that although ligand-receptor interaction plays an important role in the endocytosis of EVs, whether this interaction confers targeting specificity to EVs remains inconclusive, with the literature providing support for both possibilities. EVs can also release their cargos into the cytosol of recipient cells through fusion or hemi-fusion of the EV and recipient cell membranes [2,4]. The fusion of the hydrophobic lipid bilayer of EVs with the recipient cell plasma membrane has been found to be mediated by fusogenic SNARE proteins, the Rab family, lipid raft-like domains, integrins, and adhesion molecules [70]. However, the opposite viewpoint has been raised that SNARE proteins should not mediate the fusion of EVs with recipient cells, as the cytosolic sides of these two membranes are in opposite orientations. Therefore, the mechanisms that mediate EV-to-recipient cell fusion remain largely unknown, and evidence has implicated EV-to-recipient cell fusion as a minor mechanism for EV uptake under physiological conditions. Interestingly, the plasma membrane of tumor cells exhibits great potential to fuse with EVs in the low-pH tumor microenvironment, due to the enhanced rigidity of the plasma membrane and increased sphingomyelin content [71]. Moreover, once activated, platelets display a higher fusion capacity with EV membranes, suggesting the relevance of EV-to-recipient cell fusion to the pathogenesis of diseases [72]. Besides, EVs can modulate intracellular signaling by directly binding to surface receptors on the recipient cell [70]. For instance, dendritic cell-derived EVs activate T lymphocytes via CD40-CD40L interaction [73] and enhance immune responses of bystander dendritic cells through binding to Toll-like receptor ligands on bacterial surfaces [74]. However, whether these EVs are internalized by the recipient cells through ligand-receptor interaction-mediated endocytosis remains to be clarified.

Pathological roles of EVs in NDs

To date, mounting literature has reported roles of EVs in the occurrence and progression of different NDs, including AD, PD, ALS, and HD. Here, we discuss the contributions of EVs derived from different types of brain cells to the pathogenesis of NDs (Table 1; Fig. 2).

Pathological roles of EVs in AD

AD is the most common neurodegenerative disease and the most common cause of dementia in the elderly [75]. The etiology of AD is not clear; it is mainly related to genetic and environmental factors and to pathological proteins like phosphorylated Tau protein (p-Tau) and amyloid-beta (Aβ) [75]. Other hypotheses/theories, including neuroinflammation, gut-brain axis disorder, and metabolic dysfunction, have also been proposed [75-78]. Interestingly, growing evidence has shown altered secretion and functions of EVs during the progression of AD [79], and blocking EV release significantly mitigates the AD phenotype [80], revealing the non-negligible contributions of EVs to the pathogenesis of AD [81].

Pathological roles of neuron-derived EVs (NDEVs) in AD

Neurons are electrically excitable cells that communicate with other cells via neurotransmission and are the main component of the CNS [4]. Neurons release a great number of EVs to modulate synaptic activities and regulate cell homeostasis under physiological conditions [4,82]. The pathological roles of NDEVs in AD are receiving much attention from the scientific community. Recently, an immunoabsorption-based strategy using an anti-human L1CAM antibody has been developed to purify NDEVs from blood samples [83].
NDEVs isolated from the blood of AD patients demonstrate significantly increased levels of Aβ1-42 and p-Tau, and altered lysosomal proteins, when compared with healthy donors [83,84]. However, recent studies have questioned the utility of L1CAM as a marker of NDEVs [85,86]. L1CAM is not exclusively expressed in neurons; it is also expressed in oligodendrocytes in the CNS, immune cells (e.g., T cells, B cells, and monocytes), and endothelial cells. Besides, Norman et al. demonstrated that L1CAM is not associated with EVs in human CSF or plasma. Instead, ND-related proteins (e.g., soluble α-synuclein [α-syn]) in plasma can nonspecifically bind to the anti-L1CAM antibody and be isolated in L1CAM-immunocapture experiments [85]. Hence, advanced methodology is required to purify NDEVs from blood samples without contamination. In addition, altered levels of AD-related miRNAs have been found in EVs derived from human neuroblastoma SH-SY5Y cells stably expressing the APP695 Swedish mutation (SHSwe) and from mouse neuroblastoma N2a cells expressing human APP, compared with the corresponding controls [10,87]. These observations suggest that NDEVs facilitate the pathological spread of AD-related factors among brain cells and drive Aβ to form amyloid fibrils in the CNS [88,89]. The altered profiles of cargos also influence the biological functions of NDEVs. Compared with NDEVs isolated from the plasma of healthy individuals, NDEVs isolated from the blood of AD patients exhibit significant neurotoxic effects on cultured E18 rat cortical neurons, as ascertained by the reduced cell viability determined by MTT assay [90]. The NDEV-induced neuronal damage is likely mediated by the transfer of pathogenic molecules of AD such as APP and toxic Aβ oligomers [10,88]. The complement system, especially the membrane attack complex (MAC), also mediates the pathological effects of NDEVs isolated from the blood of AD patients, since CD59, a GPI-anchored cell membrane glycoprotein that inhibits MAC assembly, significantly reduces NDEV-induced neuronal loss [90]. In an in vitro AD model, EVs derived from SHSwe cells can be internalized by microglia, where they induce acute and delayed microglial up-regulation of tumor necrosis factor-alpha (TNF-α) and other proinflammatory factors that cause neuroinflammation, through the delivery of miR-155, miR-146a, miR-124, miR-21 and miR-125b to the microglia [87]. However, we recently found that N2a cells release miR-185-enriched EVs that suppress the expression of APP in recipient N2a cells in vitro, which implies an anti-Aβ-deposition role of NDEVs [10]. Therefore, NDEVs exhibit both pathological and beneficial effects, suggesting dynamic changes of NDEVs during the progression of AD.

Pathological roles of astrocyte-derived EVs (ADEVs) in AD

Astrocytes are the most abundant glial cells in the CNS and are associated with many functions vital to CNS physiology, including blood-brain barrier (BBB) formation and maintenance, neuroplasticity, neurotransmission, and metabolic regulation [91]. Astrocytes have a high capacity for EV release, and ADEVs have been shown to be important contributors to ND pathogenesis [9,92]. In AD, astrocytes respond to both p-Tau and Aβ, leading to the accumulation of Aβ42 protofibrils within astrocytes. The excessive Aβ up-regulates the expression of p-Tau, prostate apoptosis response 4, and ceramide to form giant endosomes for ADEV release in a co-culture system [93]. In contrast, Abdullah et al. reported that Aβ1-42 inhibits ADEV release via stimulation of the JNK signaling pathway in vitro [94].
Although conflicting results have been obtained regarding the effects of Aβ on ADEV secretion, ADEVs have been found to promote Aβ aggregation and interfere with Aβ uptake by neuroglia [95], leading to neuronal loss in AD cell and animal models [96,97]. Inhibition of Aβ formation in astrocytes by the calcium-sensing receptor signaling antagonist calcilytic NPS 2143, or blockade of exosome secretion by GW4869, a neutral sphingomyelinase 2 (nSMase2) inhibitor [98], has been shown to dramatically repress the release of p-Tau-loaded ADEVs or Aβ aggregation, respectively [95]. Moreover, enzyme-linked immunosorbent assay (ELISA) has identified significantly increased levels of BACE1 and complement proteins (e.g., C3b and the C5b-C9 terminal complex) in both plasma- and CSF-isolated ADEVs [99,100]. BACE1 is a beta-secretase involved in the cleavage of APP to form Aβ peptides, and C3b and the C5b-C9 complex may injure neurons directly, or indirectly via enhancing microglial neurotoxicity [100,101]. These results suggest that ADEVs regulate Aβ deposition and exert neurotoxicity by transferring Aβ-processing enzymes and pro-inflammatory factors in addition to p-Tau. Interestingly, ADEVs may also function as a negative regulator of AD progression, since ultrasound-mediated ADEV release alleviates Aβ-induced neurotoxicity in vitro. Whether ADEVs exert beneficial effects on AD in vivo remains to be investigated.

Fig. 2 The pathological effects of EVs on NDs. In the brain, there are EVs released from brain cells (e.g., neurons, astrocytes, microglia, and oligodendrocytes) and peripheral EVs that enter the brain through the BBB. Under pathological conditions, these EVs carry pathogenic factors including proteins/peptides, coding and non-coding RNAs, and lipids that contribute to the onset and progression of NDs through facilitating the spreading and aggregation of pathogenic molecules, enhancing cell death, stimulating inflammatory responses, and disrupting the BBB.

Pathological roles of microglia-derived EVs (MDEVs) in AD

Microglia are the resident immune-competent cells of the brain, which respond to exogenous and endogenous CNS insults and regulate brain development, neuronal network maintenance, and injury repair. Under pathological conditions, microglia polarize into different phenotypes to exert neurotoxic or neuroprotective functions; the simplest model defines microglial polarization into two main phenotypes: classic M1 activation (pro-inflammatory) and alternative M2 activation (anti-inflammatory) [102]. Transcriptome studies at the single-cell level further indicate that the M1/M2 paradigm is inadequate to summarize microglial phenotypes, since microglia rarely exhibit a significant bias toward either the M1 or M2 phenotype in vivo [103]. Instead, distinct microglial subtypes have been identified in physiological and pathological conditions, which reflect the innate dynamic nature of tissue monocytes [104-106]. Thus, microglial polarization is multidimensional, with extensive overlap in gene expression, rather than a simplified linear spectrum [105]. The activation of microglia, irrespective of the particular polarized state, has emerged as the driving force of neuroinflammation in NDs [107-109].
In this process, microglia release a great number of EVs to mediate the delivery of pathogenic molecules, to regulate the functions and viability of brain cells, and to facilitate the establishment of a disease-related microenvironment, suggesting an important role of MDEVs in the pathogenesis of NDs [110]. In AD, MDEVs have been found to directly transfer classic AD pathogenic factors including Aβ and tau among cells. Extracellular Aβ42 protofibrils can be internalized by microglia and then trafficked into MDEVs [111-113]. Moreover, MDEVs have been reported to strongly increase Aβ neurotoxicity by promoting extracellular Aβ1−42 aggregates to form small soluble neurotoxic species via the lipid components of EVs. Microglia also phagocytose and load human tau into MDEVs [80]. MDEVs thus deliver tau to neurons through non-synaptic pathways and trigger abnormal aggregation of tau, demonstrating a synergy between microglia and EVs in the spread of tau pathology in human brains [80]. Besides Aβ and p-Tau, shotgun proteomics studies have demonstrated a significant decrease in the abundance of the homeostatic microglia markers P2RY12 and TMEM119, and increased levels of the AD-associated factors FTH1 and TREM2, in CD11b+ MDEVs isolated from cryopreserved human brain tissues of AD patients, compared with age-matched normal/low-pathology cases [112]. Lipidomic analysis also showed increased levels of cholesterol, major bis(monoacylglycerol)phosphate, and monohexosylceramide lipid species, and a significant decline in levels of docosahexaenoic acid-containing polyunsaturated lipids, in AD patient brain-derived MDEVs versus controls, indicating potentially defective acyl-chain remodeling [112]. Inflammation- and cellular senescence-related miRNAs, namely miR-28-5p, miR-381-3p, miR-651-5p, and miR-188-5p, have also been found to be enriched in AD patient brain-derived MDEVs, further suggesting the complicated mechanisms of MDEV-mediated neuroinflammation and neurotoxicity in AD [112]. Importantly, Li et al. utilized IL-4 to induce the M2 phenotype of microglia and found that EVs derived from M2 microglia restored the viability and mitochondrial function of neuronal cells in vitro, and reduced Aβ deposition in vivo, suggesting beneficial effects of MDEVs on AD. Moreover, Huang et al. reported that TREM2 on the surface of MDEVs binds to Aβ, thus changing the inflammatory environment around Aβ and facilitating Aβ phagocytosis by microglia, which suggests an MDEV-mediated mechanism of microglia-Aβ crosstalk that accelerates Aβ elimination [114]. Overall, EVs have been identified as a key component of the pathological microenvironment in AD, as deregulation of EV release and cargo sorting significantly influences the onset and progression of AD. Encouragingly, a great number of studies have been performed, and more pathological functions of EVs are likely to be uncovered soon, making EVs and their contents potential biomarkers and therapeutic targets of AD.

Pathological roles of EVs in PD

PD is another common ND among the elderly, with impairment of voluntary motor control that evolves over time. The main pathological change of PD is the degeneration of dopaminergic neurons in the substantia nigra of the midbrain, resulting in a significant decrease of dopamine content in the striatum [115].
Although the exact etiology and natural course of PD have yet to be fully clarified, the EV-mediated spreading of the neuronal cytoplasmic protein α-syn, in its polymorphous and fibrillar conformations, has emerged as a key pathogenic mechanism that mediates the degeneration of dopaminergic neurons [115].

Pathological roles of NDEVs in PD

Multiple groups have reported the existence of α-syn in NDEVs in an in vitro PD model, SH-SY5Y human neuronal cells expressing α-syn [116,117]. Subsequently, α-syn was identified in L1CAM+ NDEVs isolated from human blood, and NDEVs collected from the plasma of PD patients have significantly higher levels of α-syn compared with healthy controls [118,119]. Although anti-L1CAM capture of NDEVs may be problematic, these findings implicate the involvement of NDEVs in the transmission of α-syn. This premise is supported by Danzer et al., who demonstrated efficient neuron-to-neuron transport of α-syn oligomers that induces α-syn oligomerization in normal neurons, thereby causing neuronal death, promoting the spread of pathological synuclein, and enhancing the disease process [116,120]. NDEVs also transfer α-syn to microglia and impair microglial autophagy [121]. A subsequent study showed that the sorting of α-syn into EVs is regulated by sumoylation-mediated membrane binding [122]. In addition, FGF2-triggered hippocampal NDEVs are specifically enriched in Rab8b and Rab31, which may contribute to non-motor symptoms in PD pathology, including hearing loss [123]. Other PD-related proteins, including the 20S proteasome complex (PSMA1-3, PSMA5-7, PSMB1, PSMB3, and PSMB5-6), Parkinson's disease protein 7 (PARK7), Gelsolin, Amyloid P component, Clusterin, and Stromal cell-derived factor 1 (SDF-1), have also been identified in PD patient plasma-derived NDEVs [124]. The enrichment of these pathogenic proteins in NDEVs may also participate in the onset and progression of PD directly or indirectly, which requires further investigation. Moreover, multiple miRNAs including miR-19a-3p and miR-155 have been found to be overloaded into NDEVs collected from in vitro PD models and blood samples of PD patients [121,124]. miR-19a-3p and its family members in NDEVs target various transcripts, including those encoding phosphatase and tensin homolog/AKT/mTOR signaling pathway components, to suppress autophagy in recipient cells, and miR-155 is a key mediator of α-syn-induced inflammatory responses [121,124-126]. Therefore, NDEVs facilitate α-syn aggregation and neuroinflammation by delivering PD-associated miRNAs to microglia, hence contributing to the onset and progression of PD. Meanwhile, beneficial effects of NDEVs on PD have also been found. EVs isolated during dopaminergic neuron differentiation reduce protein levels of interleukin (IL)-6, IL-1β, TNF-α, and reactive oxygen species (ROS) in the substantia nigra of a rodent model of PD, most likely through wnt5a-mediated modulation of neuroinflammation [127]. Hence, similar to the situation in AD, NDEVs are also double-edged in the pathogenesis of PD.

Pathological roles of ADEVs in PD

In PD, astrocytes and microglia remove extracellular α-syn via endocytosis to avoid α-syn accumulation in neurons [48,49]. Meanwhile, α-syn uptake induces an inflammatory response in astrocytes, which causes excessive release of ADEVs.
Although there is evidence supporting glia-glia and glia-neuron transfer of α-syn through EVs [128], whether ADEVs contain α-syn and directly mediate the spreading of α-syn remains unknown. Moreover, although astrocytes carrying the PD-related mutant LRRK2 G2019S release numbers of EVs comparable to normal astrocytes, the LRRK2 G2019S-ADEVs fail to provide full neurotrophic support after being internalized by dopaminergic neurons, indicating that alterations in the enrichment of ADEV cargos directly contribute to the progression of PD. miRNAs in ADEVs are a convincing example. Shakespear et al. reported that ADEVs contain high levels of miR-200a-3p, which targets the 3'-untranslated region (UTR) of Map2k4 and MKK4 mRNA, thereby inhibiting the c-Jun N-terminal kinase cell death pathway in an in vitro model of PD [129]. EVs derived from astrocytes stimulated with MPP+ (a PD-related neurotoxin) contain reduced levels of miR-200a-3p, resulting in loss of caspase-3 signaling inhibition and enhanced dopaminergic neuron degeneration.

Pathological roles of MDEVs in PD

Investigations into the pathological effects of MDEVs on PD were initiated by the identification of α-syn oligomers in MDEVs. EVs obtained from microglia treated with preformed fibrils (PFF) (PFF-MDEVs) contain high levels of α-syn oligomers [130]. More importantly, α-syn oligomers have been detected in CD11b+ MDEVs derived from the CSF of PD patients, confirming the in vitro findings [130]. MDEVs then spread α-syn oligomers through microglia-to-neuron α-syn transmission, leading to dopaminergic neuron degeneration and behavioral changes in mice that received stereotaxic injection of PFF-MDEVs into the striatum [130]. Moreover, α-syn induces an increase of exosomal secretion by microglia, forming a vicious cycle that exacerbates MDEV-mediated pathological spread of α-syn [131]. Besides, EVs derived from microglia stimulated by α-syn/interferon-γ (IFN-γ)/lipopolysaccharide (LPS) to mimic PD inflammatory conditions also contain high levels of MHC class II molecules and TNF-α that trigger dopaminergic neurodegeneration, indicating the complex mechanisms of MDEV-mediated onset and progression of PD [131,132].

Pathological roles of oligodendrocyte-derived EVs (ODEVs) in PD

Besides the aforementioned cell types, other types of brain cells also perform vital physiological and pathological functions in the brain, particularly oligodendrocytes, the glial cells that generate myelin sheaths to promote rapid neurotransmission in the CNS. Triggered by neuronal signals, myelinating oligodendrocytes secrete EVs into the extracellular space [133]. These ODEVs can be internalized by neurons, supporting axonal transport and maintenance [134]. Given the great impact of ODEVs on the homeostasis of the CNS, studies on the pathological contributions of ODEVs in PD have been carried out recently. A recent study using a modified ELISA assay for brain-derived EVs demonstrated that plasma levels of ODEVs are significantly higher in PD patients compared with healthy controls and patients with multiple system atrophy (MSA), a synucleinopathy whose symptoms largely overlap with those of PD [135]. Similar to the cell type-specific EV immunoprecipitation approach for NDEV and ADEV isolation from human blood and CSF, Dutta et al. utilized an antibody against myelin oligodendrocyte glycoprotein (MOG) to collect ODEVs from the blood of PD patients [136].
The collected EVs contain significantly higher levels of α-syn than those from healthy controls, indicating ODEVs as a platform for α-syn spreading in the CNS [136]. However, our knowledge of the pathological contributions of ODEVs in PD remains limited, and extensive investigations are needed in the future. Together, numerous studies have suggested exosomes as a double-edged sword in PD. More comprehensive studies are needed to clarify the pathological and beneficial effects of exosomes on PD.

Pathological roles of EVs in ALS

ALS is a fatal, adult-onset neurodegenerative disease characterized by a progressive loss of motor neurons in the brain, brainstem, and spinal cord, rapidly leading to atrophy of bulbar, limb, or respiratory muscles. Although the majority of clinical ALS cases are sporadic, mutations in human copper-zinc superoxide dismutase (SOD1) and other genes have been identified in inherited cases of ALS. As a key component of the pathological microenvironment, EVs have been found to play a significant role in the pathogenesis of ALS.

Pathological roles of NDEVs in ALS

One important contribution of NDEVs to the pathogenesis of ALS is the delivery of pathogenic factors to neuroglial cells. A recent study reported that EVs positive for SNAP25 (a synaptic marker) harvested from the brain and spinal cord tissues of an ALS mouse model contain misfolded neurotoxic SOD1 [137]. Microglial uptake of mutant SOD1-containing NDEVs induces inflammatory responses and reduces the phagocytic ability of microglia [138]. Moreover, other pathogenic factors of ALS, dipeptide repeat proteins (DPRs) and TAR DNA-binding protein-43 (TDP-43), have also been found in EVs released from spinal motor neurons derived from induced pluripotent stem cells of C9orf72-ALS patients [139]. DPR-containing NDEVs can be internalized by astrocytes and induce astrocyte toxicity, thereby causing neurodegeneration [139,140]. These observations suggest a tight association of NDEVs with the progressive propagation of ALS-related pathology spreading from CNS foci. The miRNA profiles are also significantly altered in NDEVs in the plasma of ALS patients [141]. In ALS patient plasma-isolated NDEVs, 13 miRNAs were significantly up-regulated (e.g., miR-24-3p) and 17 miRNAs were significantly down-regulated (e.g., miR-150-3p), compared with controls. miR-24-3p has been identified as a neurodegeneration-related miRNA that disturbs neuroplasticity and enhances neural damage, presumably through regulating BOK and CHD5 [142,143]. In contrast, miR-150-3p has neuroprotective effects by targeting CASP2 [144]. The up-regulated neurotoxic miRNAs and down-regulated neuroprotective ones in NDEVs imply another potential mechanism of NDEV-mediated pathogenesis of ALS. Moreover, the expression levels of proteins involved in the regulation of the synaptic membrane and axoneme are also significantly reduced in EVs collected from the CSF of ALS patients [145]. However, whether these observations are mediated by NDEVs remains unknown, since the cellular origins of these exosomes remain unclear.

Pathological roles of ADEVs in ALS

Under ALS pathological conditions, astrocytes exhibit distinct EV secretion capacity. For example, in in vitro ALS models, astrocytes with SOD1 mutation release more EVs compared with controls [146], amplifying the effects of ADEVs on the brain microenvironment in ALS. More importantly, the content profiles of ADEVs are also significantly altered in ALS. Basso et al.
reported that mutant SOD1 is packaged into ADEVs [146]. The delivery of mutant SOD1 from astrocytes to neurons via ADEVs induces selective motor neuron death in vitro. Moreover, human induced astrocytes from ALS patients carrying C9orf72 mutations release EVs lacking miR-494-3p, a negative regulator of the axonal maintenance-related gene semaphorin 3A (SEMA3A) [147]. The depletion of miR-494-3p in ADEVs therefore unleashes SEMA3A-induced motor neuron degeneration in ALS. Similar to the situation in vitro, Chen et al. showed a significant increase of IL-6 in ADEVs isolated from the plasma of sporadic ALS patients, suggesting alterations of ADEV cargos in ALS patients [148]. This finding implies an important role of the ADEV-mediated pathological spread of pro-inflammatory factors in the initiation and exacerbation of neuroinflammation, a key pathological feature of ALS.

Pathological roles of MDEVs in ALS

The involvement of microglia in the onset and progression of ALS is being increasingly recognized. In ALS animal models, the overexpression of mutant SOD1 drives microglial activation, autophagy impairment, and hyperexpression of pro-inflammatory factors (e.g., MFG-E8, RAGE, IL-1β, TNF-α, and iNOS), thereby reducing the capacity for mutant SOD1 elimination [149]. Consequently, microglia release excessive mutant SOD1 via MDEVs [150]. When motor neurons internalize MDEVs, the intracellular accumulation of mutant SOD1 then induces neurotoxicity and neuronal damage [149,150]. Furthermore, the levels of HMGB1, miR-155, and miR-146a are significantly increased in EVs derived from mutant SOD1-overexpressing microglia [151]. The HMGB1/RAGE axis has been reported to mediate neuroinflammation via impairing the mitophagy flux in microglia [152], and miR-155 and miR-146a have been identified as pro-inflammatory miRNAs that regulate microglial activation [153,154]. Thus, MDEVs enriched in these pro-inflammatory molecules also contribute to neuroinflammation, leading to aggravation of ALS phenotypes.

Pathological roles of EVs in HD

HD is a rare, progressive, and fatal hereditary ND caused by CAG expansion in the first coding exon of the HTT gene [155]. It is characterized by progressive movement dysfunction and cognitive decline, ending in death within 15-20 years after diagnosis. Elevated levels of total HTT and mutant HTT (mHTT) fragments have been reported in EVs from the plasma of both pig models and HD patients compared to controls, implying the involvement of EVs in the pathogenesis of HD [156].

Pathological roles of NDEVs in HD

Neurons express excessive HTT in the brains of HD patients [155,157]. EVs have been found to inherit mRNA with an expanded CAG-repeat element from their parent cells with excessive HTT expression, although total HTT and mutant HTT fragments have not been detected in NDEVs [157]. These observations suggest that NDEVs participate in the spreading of pathogenic HTT within the brain, although conclusive evidence remains lacking. Besides, an in vitro study also suggests a role for NDEVs against HTT spreading [158]. NDEVs can transfer HTT-targeting miRNAs to HD patient-derived neurons, which leads to the inhibition of HTT mRNA expression in the latter, providing evidence for NDEV-dependent HTT suppression mechanisms [158]. Despite these preliminary in vitro studies, more studies are required to further clarify the pathological/beneficial roles of NDEVs in HD.
Pathological roles of ADEVs in HD

To date, studies focusing on the involvement of ADEVs in the pathogenesis of HD remain limited. Deep sequencing analysis of genes highly expressed in ADEVs suggests that ADEVs contribute to the promotion of HD [159]. In the HD140Q knock-in mouse model, although mHTT is not identified in ADEVs, it inhibits ADEV release by suppressing the expression of αB-Crystallin (CRYAB), a heat shock protein that mediates EV secretion [160]. Furthermore, the sorting of CRYAB into ADEVs is also inhibited by mHTT, leading to neuroglial activation and neuroinflammation that cause neurodegeneration in HD.

Pathological/beneficial roles of peripheral EVs in NDs

Interestingly, growing evidence has implicated the involvement of peripheral EVs in the pathogenesis of neurological diseases, with the discovery of crosstalk between the brain and other organ systems in a "bottom-up" manner, including the gut-brain, lung-brain, and bone-brain axes [161,162]. Intestinal epithelial cells have been reported to release EVs that induce IL-1β-mediated neuronal injury in sepsis-associated encephalopathy, which triggers long-term cognitive deficits and neurodegeneration [163]. Moreover, ventilation-induced lung injury causes lung inflammation, leading to selective loading of caspase-1 into lung-derived EVs [164]. Caspase-1-enriched peripheral EVs induce microglial activation and cell pyroptosis in the brain, revealing circulating EVs as a pathogenic factor of NDs [164]. Peripheral EVs have also been reported to have potential beneficial effects on NDs. For instance, young osteocytes, the most abundant cells in bone, secrete neuroprotective EVs that enhance cognitive function and ameliorate pathological changes in AD mice [161]. Another example is mesenchymal stem cells (MSCs), which have been widely used for the production of EVs with therapeutic effects on NDs (details summarized in a later section) [165-167]. These observations imply that endogenous MSCs may release EVs to decrease the risk of NDs or delay the progression of NDs, which is an interesting topic for future investigation. It is worth noting that there are also hints of the involvement of EVs in the pathogenesis of multiple sclerosis (MS), an autoimmune ND [168]. However, they are not discussed in this review due to limited literature support. Besides, although outside the scope of our review, EVs also participate in acute neural damage by modulating the activation of neurotoxic microglia and astrocytes [165,169]. Overall, current evidence indicates both pathological and beneficial roles of EVs in the pathogenesis of NDs. More studies, especially in vivo ones, are urgently needed to clarify the involvement of EVs in NDs, and to develop novel EV-based diagnostic and therapeutic strategies for NDs.

EVs as novel biomarkers for the diagnosis of NDs

Identification of biomarkers for NDs in the blood is challenging, since the BBB prevents free passage of molecules between the CNS and blood compartments. Furthermore, several potential biomarkers related to the pathology of NDs are expressed in non-CNS tissues, significantly confounding their measurement in the blood. Given the pathological roles of EVs in NDs and their BBB penetration capacity, brain-derived EVs inherently possess the potential to serve as biomarkers for the diagnosis of NDs. In this section, we summarize recent studies that provide evidence for utilizing EVs and their cargos as potential biomarkers for disease diagnosis (Table 2).
EVs as novel biomarkers for the diagnosis of AD

As accumulation of Aβ deposits and formation of neurofibrillary tangles composed of p-Tau in the brain are major pathological hallmarks of AD, neuroimaging approaches including magnetic resonance imaging (MRI) and positron emission tomography (PET), and CSF examinations that detect Aβ (Aβ1−42 and Aβ1−40) and p-Tau, are used as the gold standard for AD diagnosis [170]. However, the invasive nature of these procedures, the associated risks, and the relatively high costs have limited their practicability. Blood-based diagnostics can overcome these disadvantages due to their non-invasiveness, lower cost, and capability of multiple sampling in large cohorts. The correlation between blood-based AD biomarkers and pathological changes in the brain has been widely investigated [171]. Scientists have isolated EVs from the sera of healthy controls and AD patients, and characterized their contents via proteomic analyses [172]. They identified four circulating EV proteins, including alpha-1-antichymotrypsin (AACT) isoform 1, complement component 9, immunoglobulin heavy constant mu isoform 2, and keratin type II cytoskeletal 6A, that are significantly up-regulated in AD patients compared with control individuals. Furthermore, five circulating EV proteins, including apolipoprotein C-III, beta-2-glycoprotein 1, C4b-binding protein alpha chain (C4BPα), complement C3, and immunoglobulin kappa variable 2-30, are significantly down-regulated in AD patients compared with control individuals, suggesting these proteins as putative biomarker candidates. The altered expression levels of two Aβ-binding proteins, AACT and C4BPα, in AD patient serum-isolated EVs were further validated in individuals from independent cohorts [172]. Besides, non-coding RNAs in peripheral EVs have been found to have diagnostic potential for AD [173]. Lugli et al. identified seven miRNAs (miR-342-3p, miR-141-3p, miR-342-5p, miR-23b-3p, miR-24-3p, miR-125b-5p, and miR-152-3p) in plasma EVs as significant predictors of AD in a machine learning model [174]. Receiver operating characteristic (ROC) curve analysis, which summarizes diagnostic performance by the area under the curve (AUC) and identifies optimal cut-off values, suggested excellent sensitivity of these miRNAs in plasma EVs for discriminating AD patients from healthy controls (sensitivity, 81.7%). In addition, Yang et al. reported that miR-135a and miR-384 were up-regulated, while miR-193b was down-regulated, in EVs isolated from AD patient sera. The combination of miR-135a, miR-193b, and miR-384 in serum-derived EVs performs better in AD diagnosis than each individual miRNA (sensitivity, 99%; specificity, 95%) [175]. Moreover, miRNAs in blood-derived EVs have been demonstrated to be predictors of AD at the asymptomatic stage (pre-AD). A multicenter study identified a panel of miRNAs that are changed (up-regulated: miR-29c-5p, miR-143-3p, miR-335-5p, and miR-485-5p; down-regulated: miR-138-5p and miR-342-3p) in AD patients and predicted that this panel can detect pre-AD 5 to 7 years before the onset of cognitive decline (AUC = 0.88) [176]. Fotuhi et al. also found that the level of the lncRNA BACE1-AS in plasma-derived EVs significantly differs between AD patients and healthy controls, and that plasma-derived EV BACE1-AS exhibited great diagnostic power for pre-AD (sensitivity, 75%; specificity, 100%) [177].
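The ROC/AUC figures quoted throughout this section can be made concrete with a short computation. The sketch below is a minimal illustration on synthetic data (none of the values correspond to any cited cohort): it shows how the AUC and a Youden-optimal cut-off, with its associated sensitivity and specificity, would be derived for a single hypothetical EV miRNA measurement using scikit-learn.

```python
# Minimal sketch on synthetic data: ROC/AUC and a Youden-optimal cut-off
# for one hypothetical EV miRNA. All numbers are invented for illustration.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical normalized EV miRNA levels: 50 healthy controls vs. 50 patients,
# with the patient distribution shifted upward.
controls = rng.normal(loc=1.0, scale=0.4, size=50)
patients = rng.normal(loc=1.6, scale=0.5, size=50)

y_true = np.concatenate([np.zeros(50), np.ones(50)])  # 0 = control, 1 = AD
scores = np.concatenate([controls, patients])

# ROC curve: true-positive rate (sensitivity) against false-positive rate
# (1 - specificity) across every possible cut-off.
fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)

# Youden's J = sensitivity + specificity - 1 selects the optimal cut-off.
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}")
print(f"cut-off = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```

In the published studies, the same machinery is applied to measured EV cargo levels: the AUC summarizes discrimination across all cut-offs, whereas each reported sensitivity/specificity pair corresponds to one chosen operating point.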
These findings show the possibility of utilizing circulating EV contents as biomarkers for AD before the occurrence of clinical symptoms. To further enhance the sensitivity and specificity of EV-based diagnosis, scientists have made great efforts to identify potential AD biomarkers in NDEVs, ADEVs, and MDEVs isolated from plasma or serum. They demonstrated that Aβ42/40 (AUC = 0.973) and miR-384 (AUC = 0.909) in NDEVs co-labeled with neural cell adhesion molecule (NCAM) and ATP-binding cassette transporter A1 have potential advantages in AD diagnosis [178]. In another study, miR-29c-3p in plasma NCAM/amphiphysin 1 dual-labeled NDEVs showed good diagnostic performance for subjective cognitive decline (AUC = 0.789) and AD (AUC = 0.927) [179]. The combination of Aβ42, Aβ42/40, Tau, p-T181-tau, and miR-29c-3p in plasma-isolated NDEVs displays even better diagnostic efficiency than each individual biomarker. More importantly, the levels of these AD biomarkers in plasma-isolated NDEVs are strongly correlated with those in the CSF, and the AD biomarkers from the two sources have comparable diagnostic power (plasma-isolated NDEVs, AUC = 0.911; CSF-isolated NDEVs, AUC = 0.901). Cha et al. showed that miR-212 and miR-132 were down-regulated in AD patient plasma-derived NDEVs and could be used as potential AD biomarkers (AUC = 0.84, sensitivity = 92.2%, specificity = 69.0% for miR-212; AUC = 0.77 for miR-132) [180]. Importantly, isolation of NDEVs from plasma significantly increases the sensitivity for diagnosing AD at the pre-AD stage, compared with raw plasma-isolated EVs. Fiandaca et al. found that the mean levels of total Tau, p-T181-tau, p-S396-tau, and Aβ1−42 in NDEVs isolated from the plasma or serum of AD patients were significantly higher than those of healthy donors even 1 to 10 years before the patients were diagnosed with AD [181]. The combination of these biomarkers in blood-isolated NDEVs displays promising potential for pre-AD diagnosis (AUC = 0.999, sensitivity = 96%), indicating the ability of NDEVs to predict AD onset and development. In addition, accumulating evidence suggests that mitochondrial dysfunction is associated with the contribution of diabetes to AD progression and may serve as a potential biomarker to diagnose AD among diabetic patients. Scientists have reported that the levels of NADH ubiquinone oxidoreductase core subunit S3 (NDUFS3) and succinate dehydrogenase complex subunit B (SDHB) are significantly lower in L1CAM+ NDEVs isolated from the plasma of type 2 diabetes mellitus (T2DM) patients with AD dementia and of progressive mild cognitive impairment (MCI) patients than in cognitively healthy individuals [182]. They also found that the levels of NDUFS3 and SDHB in plasma-isolated NDEVs are lower in progressive MCI patients than in stable MCI patients [182]. These results indicate the promise of mitochondrial proteins in plasma-isolated NDEVs as potential diagnostic biomarkers at the earliest symptomatic stage of AD in participants with diabetes, although further studies separating NDEVs from blood samples using more reliable neuronal markers are required to validate these results [182]. Apart from the potential NDEV biomarkers, MDEVs may also contain AD biomarkers. Fernandes et al. found that microglia internalize SHSwe cell-released EVs, which are enriched in miR-155, miR-146a, miR-124, miR-21, and miR-125b, recapitulating the profile of their cells of origin [87].
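Several of the studies above report that a combination of NDEV markers outperforms any single marker. The sketch below illustrates that pattern on synthetic data, assuming a hypothetical five-marker panel (the count simply echoes the five-marker NDEV combination discussed above): each marker alone is only weakly informative, while a logistic regression over the whole panel, scored with cross-validated probabilities to avoid an optimistic in-sample AUC, discriminates better than the best individual marker.

```python
# Minimal sketch on synthetic data: combined-panel vs. single-marker AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict

rng = np.random.default_rng(1)
n = 100
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])  # 0 = control, 1 = AD

# Five hypothetical markers, each only weakly shifted in patients, so no
# single column discriminates well on its own.
X = rng.normal(size=(n, 5)) + 0.5 * y[:, None]

# Best AUC achievable by any single marker alone.
best_single = max(roc_auc_score(y, X[:, i]) for i in range(X.shape[1]))
print(f"best single-marker AUC = {best_single:.3f}")

# Combined panel: logistic regression evaluated on out-of-fold predicted
# probabilities so the AUC is not inflated by in-sample fitting.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(LogisticRegression(), X, y, cv=cv,
                          method="predict_proba")[:, 1]
print(f"combined-panel AUC = {roc_auc_score(y, proba):.3f}")
```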
The data of Fernandes et al. further revealed that miR-21 is a consistent biomarker, found not only in SHSwe cells and SHSwe-released EVs, but also in the recipient microglia and MDEVs. This study highlights miR-21 in EVs as a potential biomarker for AD [87].

EVs as novel biomarkers for the diagnosis of PD

EVs and their contents have also been studied for their potential as biomarkers of PD. The plasma levels of different types of brain-derived EVs are increased in PD compared with controls and MSA patients [135]. AUC values of the ROC curves for plasma-isolated SNAP25+ NDEVs, EAAT1+ ADEVs, and OMG+ ODEVs were 0.82, 0.75, and 0.78, respectively, indicating the capability of the plasma levels of brain-derived EVs to serve as diagnostic biomarkers for PD. Besides EVs per se, the level of α-syn in EVs remains stably increased with PD progression and is positively correlated with the severity of PD, displaying a moderate diagnostic value (AUC = 0.724, sensitivity = 76.8%, specificity = 53.5%) [183]. Moreover, the level of prion protein (PrP), a protein contributing to cognitive decline in PD patients, in plasma-derived EVs negatively correlates with the cognitive performance of PD patients, suggesting that PrP in circulating EVs might be a potential biomarker for PD patients at risk of cognitive impairment [184]. In addition to EV proteins, Gui et al. identified down-regulated (e.g., miR-1 and miR-19b-3p) and up-regulated miRNAs (e.g., let-7g-3p, miR-153, miR-409-3p, and miR-10a-5p) in EVs isolated from the CSF of PD patients versus controls [185]. Each of the differentially expressed miRNAs in CSF-derived EVs exhibits excellent to moderate diagnostic power for PD (AUC: 0.780-0.920), and a combination of miR-153 and miR-409-3p achieves an AUC of 0.990 [185]. The differential diagnosis between PD and atypical parkinsonian syndromes is difficult due to the lack of reliable, easily accessible biomarkers. Contents of serum EVs have been shown to be capable of predicting and distinguishing PD from atypical parkinsonism. Jiang et al. showed that α-syn in combination with clusterin in serum-derived NDEVs predicts and differentiates PD from atypical parkinsonism with a promising diagnostic value (AUC = 0.98) [119]. Similarly, Dutta et al. analyzed α-syn levels in serum- or plasma-derived EVs of PD patients, MSA patients, and healthy individuals. They found that α-syn levels are significantly lower in the control group and significantly higher in the MSA group compared with the PD group. The ratio of the α-syn level in putative ODEVs to that in putative NDEVs is a particularly sensitive biomarker for distinguishing between PD and MSA (AUC = 0.902, sensitivity = 89.8%, specificity = 86.0%). Their data demonstrated that a minimally invasive blood test measuring α-syn levels in circulating EVs that can be immunoprecipitated using CNS markers can distinguish between PD patients and MSA patients with high sensitivity and specificity [136].

EVs as novel biomarkers for the diagnosis of ALS

To date, no definite ALS biomarkers are available. To discover efficient and accessible biomarkers for ALS, studies have been carried out to examine differentially expressed proteins in EVs between ALS and control groups utilizing blood samples from ALS patients. Among them, the level of coronin-1a (CORO1A) is 5.3-fold higher in EVs isolated from the plasma of ALS patients than in controls [186]. CORO1A levels increase proportionally with disease progression in the plasma of ALS patients and in the spinal cord of ALS mice.
As CORO1A significantly affects ALS pathogenesis, it may be a potential biomarker for ALS [186]. Moreover, in a longitudinal study, plasma-derived EV samples collected from 18 ALS patients aged between 20 and 65 years were analyzed at baseline and at 1, 3, 6, and 12 months of follow-up [187]. The levels of neurofilament light chain (NFL) and phosphorylated neurofilament heavy chain (pNFH) were measured by ELISA, and that of TDP-43 was determined by flow cytometry. The level of TDP-43 in plasma-derived EVs significantly increased at the 3-month and 6-month follow-ups. When subclassifying patients into rapid- and slow-progression groups, EV NFL, but not pNFH, was significantly higher in the rapid-progression group at baseline and at the 3-month follow-up [187], indicating NFL in plasma-derived EVs as a biomarker for disease progression. However, further studies are needed to demonstrate the diagnostic power of the aforementioned proteins. ALS-associated miRNA profiles in EVs from the CSF or peripheral blood of patients have also been tested. miR-146a-5p, a miRNA involved in the regulation of synaptic plasticity and inflammatory responses through inhibition of synaptotagmin 1 and neuroligin 1 [188], shows decreased expression in EVs from the CSF of ALS patients [189]. However, its diagnostic power remains unknown. Saucier et al. sequenced miRNAs in EVs from the plasma of ALS patients, and found differential expression of 22 miRNAs between ALS and controls [190]. Among these miRNAs, miR-15a-5p (AUC = 0.976, sensitivity = 92.9%, specificity = 91.7%) and miR-193a-5p (AUC = 0.844, sensitivity = 80.0%, specificity = 88.9%) show promising diagnostic value for ALS [190]. Similarly, miRNA analysis of L1CAM+ NDEVs from ALS patient plasma showed deregulation of 30 miRNAs compared with healthy controls [141]. The deregulated miRNAs are involved in synaptic vesicle-related pathways, four of which are also deregulated in motor cortex tissues of ALS patients [141]. Another study using the same approach identified a potential miRNA fingerprint in L1CAM+ NDEVs from the plasma of ALS patients (containing miR-146a-5p, miR-199a-3p, miR-151a-3p, miR-151a-5p, and miR-199a-5p) that showed up-regulation in ALS patients compared with healthy controls, while 3 miRNAs (miR-4454, miR-10b-5p, and miR-29b-3p) were down-regulated in ALS [191]. However, the authors did not further validate the L1CAM+ NDEVs or determine the sensitivity and specificity of these miRNAs regarding the diagnosis of ALS.

EVs as novel biomarkers for the diagnosis of HD

Misfolded proteins or protein aggregates are pathological hallmarks of HD as well; thus, studying misfolded proteins or their regulators might be a crucial part of developing biomarkers for HD [192]. Numerous studies have shown that EVs contain mHTT, its fragments, and many other molecules that reflect disease state, which may be potential biomarkers of HD. However, to date, only a few studies have analyzed EVs or their contents in the search for HD biomarkers. Ananbeh and colleagues reported elevated total HTT levels in plasma-derived EVs of HD patients compared with control donors, as well as in HD pig models compared with control pigs, representing an important initial step towards the characterization of EV contents in the search for HD biomarkers [156]. Afterwards, EVs derived from platelets, the cell type that contains the highest level of mHTT among blood cells [193], were investigated as possible HD biomarker carriers [194].
However, no differences were found in the number of platelet-released EVs between HD patients and healthy controls, and the number of platelet-released EVs did not correlate with patient age, CAG repeat number, or disease stage [194]. More importantly, mHTT protein is undetectable in EVs released from platelets [194], indicating that platelet-derived EVs might not be able to serve as HD biomarkers. On the other hand, while EV non-protein contents, such as miRNAs, have been frequently studied for their potential as biomarkers for AD, PD, and ALS, little is known in HD due to the scarcity of relevant studies [192]. Hence, no significant advance has been made in utilizing EVs as potential HD biomarkers. Together, the findings discussed above represent important contributions to the identification of EV biomarker candidates for AD, PD, ALS, and HD. More importantly, although isolation of brain cell-derived EVs can be costly, time-consuming, and labor-intensive, a large number of studies have demonstrated that the contents (e.g., miRNAs) of brain cell-derived EVs isolated from blood are much more sensitive and specific than molecules measured directly in blood [2,195,196]. However, there are challenges that restrict the application of EVs and their cargos to the diagnosis of NDs. First, advanced technologies are required to minimize contaminants, especially in plasma, and to clearly validate the key biological/pathological components in EVs. Second, it remains challenging to isolate circulating EVs derived from the brain and to identify the specific types of brain cells of origin [85,86], which may be overcome by the discovery of more specific markers and the development of more innovative separation methodologies. Third, it is important to distinguish between different subtypes of EVs, since reducing the heterogeneity of EV samples will greatly strengthen diagnostic interpretations. Fourth, as alterations of the contents of circulating EVs reflect systemic host responses, studies in large patient cohorts are necessary to clarify the power, sensitivity, and specificity of certain EV contents in the diagnosis of NDs. Therefore, further studies are needed to overcome current challenges and provide a clearer and more comprehensive picture of the utilization of EVs or their contents as standard, routine diagnostic tools for NDs in the clinic.

EV-based therapeutic strategies in the treatment of NDs

Cells are able to manipulate the molecular composition and function of the extracellular matrix by secreting EVs into it [197]. EVs transmit signaling molecules through local or distal pathways [198]. Given that EVs can contain and transport toxic molecules, and that EV contents are relatively stable and long-lasting, EV-based therapeutic strategies have been proposed for ND treatment. As EV-mediated responses can be either disease-promoting or disease-restraining depending on EV contents and states, EVs have been proposed as potential therapeutic targets or agents for ND treatment. Engineered EVs can deliver diverse therapeutic cargos, including short interfering RNAs, antisense oligonucleotides, chemotherapeutic agents, and immune modulators [199]. Importantly, because EVs are components of the native cellular transport system, they are less likely to induce immunogenic responses than external bioactive medications. Given these properties, engineered EVs are proposed as a potential drug delivery platform for ND therapeutics.
Here we summarize recent studies that demonstrate the use of EVs as therapeutic targets, agents, or drug delivery platforms for ND treatment in cellular and animal studies.

Pathogenic EVs as targets for the treatment of NDs

Because EVs have been identified as carriers of pathological molecules during disease progression, pharmacological modulation of the release of EVs that contain pathogenic ND-associated proteins is a common approach for EV-based therapy development. For instance, EVs derived from neurons and activated glial cells were found to carry Aβ, tau, and pathogenically altered miRNAs in AD [10,83,84,87,111-113]. One study using a transgenic AD mouse model (5×FAD mouse) showed that reducing exosome release with GW4869 decreased total Aβ1−42 and the number of plaques in mouse brains, suggesting that reducing exosome release might have therapeutic benefit for AD treatment [95]. Similar results were obtained in an in vitro AD model, in which blockade of exosome release via siRNA against sphingomyelin synthase 2 enhanced Aβ uptake by microglia and significantly suppressed Aβ deposition [89]. However, indiscriminately modifying EV release in the brain may exert undesirable side effects; thus, fine-tuned targeting of EVs derived from specific cell types, or of EVs altered in distinct signaling pathways, is urgently needed. EVs have also been found to carry PD pathogenic cargos such as α-syn and altered miRNAs that mediate disease progression [116,117,130,131]. Studies targeting EVs in PD have revealed promising directions. Indirect modulation of EV release through restoration of the autophagy flux by inhibiting Drp1, the key regulator of mitochondrial fission and fusion, attenuates α-syn propagation and aggregation [200]. This study demonstrates that limiting EV release by modulating Drp1 has therapeutic potential to mitigate α-syn transmission and aggregation in PD, with efficacy shown for both NDEVs and MDEVs [201]. Although many studies have shown the promising potential of blocking EV release in animal models of NDs, this strategy remains far from clinical practice. Due to the lack of knowledge on EV biogenesis, it is currently impossible to manipulate EV secretion without interrupting other biological processes in cells. The generally accepted approach to blocking EV release is to inhibit the activity of nSMase2 with GW4869, PDDC, and other chemicals [202,203]. In addition to controlling exosome secretion, nSMase2 and its product ceramide are widely associated with other biological processes, including synaptic vesicle recycling [204], cell death regulation [205], and maintenance of cell metabolic homeostasis [206]. Hence, inhibition of nSMase2 activity may inevitably cause many adverse effects. Moreover, GW4869 has been found to reduce exosome release while enhancing MV generation [98]. Without fully dissecting the heterogeneity of EVs under pathological conditions, it would be neither feasible nor meaningful to target key pathogenic EV subtypes for the treatment of NDs.

Stem cell-derived EVs as potential therapeutic agents for the treatment of NDs

Utilizing EVs as potential therapeutic agents for disease treatment is another area of interest in the field. Recent studies have revealed that, after transplantation, stem cells exert their therapeutic effects by secreting EVs and other factors into the microenvironment via a paracrine mechanism.
Because crossing the BBB is a critical challenge for stem cell therapy, stem cell-derived EV-based therapeutic strategies might be particularly useful for the treatment of NDs (Table 3; Fig. 3).

Fig. 3 The therapeutic effects of stem cell-derived EVs on NDs. To date, the therapeutic effects of EVs derived from MSCs, NSCs, NBCs, and SHEDs have been reported in various animal models of NDs. These stem cell-derived EVs carry an Aβ degradation-related enzyme (e.g., NEP) and lipids (e.g., GSLs), growth factors, neurotrophic factors, and therapeutic miRNAs. The administration of stem cell-derived EVs therefore improves cognitive/motor function, facilitates Aβ/α-syn clearance, enhances neuroprotection, suppresses neuroinflammation, and promotes neuroregeneration in ND animal models.

Mesenchymal stem cell-derived EVs as potential therapeutic agents for the treatment of NDs

MSCs are the most commonly investigated stem cells for therapies due to their capacity for damage repair and inflammation modulation. Mounting in vitro and in vivo studies have demonstrated promising effects of MSCs on neurological recovery, immunomodulation, and neoangiogenesis in various NDs [207]. In recent years, MSC-derived EVs have attracted much attention since they exhibit therapeutic effects similar to those of their parental cells in treating NDs and have multiple advantages including negligible immunogenicity, more flexible administration strategies, and convenient content and surface modifications [166]. Emerging evidence has suggested that MSC-derived EVs achieve their therapeutic effects via multiple mechanisms. MSC-derived EVs facilitate the degradation of pathogenic proteins, and have been shown to attenuate Aβ expression while increasing the expression of genes related to memory and neural synaptic function in both cell and animal models of AD [208]. These alterations, in turn, elevate brain glucose metabolism and reverse cognitive dysfunction in AD transgenic mice [208]. Katsuda and colleagues reported the existence of neprilysin (NEP), one of the most pivotal Aβ-degrading enzymes, in adipose MSC-derived EVs [209]. In cultured cells, NEP-loaded EVs reduce levels of both released and intracellular Aβ in neuroblastoma cells (NBCs), demonstrating the beneficial significance of adipose MSC-derived EVs for AD [210]. Likewise, bone marrow MSC-derived EVs largely reverse dopaminergic neurodegeneration in a C. elegans model of PD through decreasing α-syn aggregates, suggesting that MSC-derived EVs facilitate the degradation of pathogenic proteins in PD [211]. MSC-derived EVs also exert anti-apoptotic and pro-survival effects [212-216]. Wei et al. demonstrated that miR-223-enriched MSC-derived EVs inhibit neuronal cell apoptosis and enhance cell migration in an AD cell model via the PTEN and PI3K-Akt pathways [213]. Lee et al. also demonstrated that EVs secreted by adipose MSCs reduce β-amyloidosis and neuronal apoptosis in AD transgenic mice and enhance axonal growth [214]. Decreased expression of p53, Bax, pro-caspase-3, and cleaved caspase-3, and increased expression of Bcl-2, have been found following treatment with adipose MSC-derived EVs in AD transgenic mice [214]. This study reflects the pro-survival effects of MSC-derived EVs against Aβ-triggered neuronal dysfunction and neural loss [214]. In addition, recent studies have shown that pretreatment with MSC-derived EVs dampens 6-OHDA-stimulated SH-SY5Y cell apoptosis through boosting autophagy for neural protection [215,216]. In 6-OHDA-injected rats, transplanted EVs cross the BBB, diminish apoptosis of dopaminergic neurons, and meanwhile enhance dopamine levels in the striatum [215]. In MPTP-treated mice, Xue et al. found that MSC-derived EVs stimulate angiogenesis of human brain microvascular endothelial cells (HBMECs) by enhancing the expression of intercellular adhesion molecule-1 (ICAM-1) and repairing the 1-methyl-4-phenylpyridinium (MPP+)-induced damage to endothelial cells [217]. Indeed, MSC-derived EVs trigger HBMEC angiogenesis through up-regulation of ICAM1 by activating the SMAD3 and P38MAPK signaling pathways [217]. Moreover, intraperitoneal injection of EVs notably increases TH+ dopaminergic neurons in the substantia nigra (SN) and up-regulates CD31 expression in the corpus striatum of treated mice, leading to recovery of these animals [217]. Reports have also shown that the desired pro-survival effects of MSC-derived EVs are, to a large extent, mediated by various biological molecules in MSC-derived secretomes, including SDF-1, growth factors (BDNF, VEGF, and GDNF), MMP2, heat shock protein 27, and semaphorin 7a, in 6-OHDA-injected rats [218]. Moreover, scientists demonstrated that adipose MSC-derived EVs play a neuroprotective role in in vitro ALS models [219]. They discovered 189 proteins in adipose MSC-derived EVs that contribute to cell adhesion and negative modulation of apoptotic pathways. Further analysis revealed that the EV therapy suppresses the expression of the pro-apoptotic proteins Bax and cleaved caspase-3 and conversely increases the expression of the anti-apoptotic protein Bcl-2 in in vitro ALS models [219]. Intravenous and intranasal administration of adipose MSC-derived EVs in the SOD1G93A mouse model of ALS protects lumbar spinal cord motor neurons from neurodegeneration, presumably through suppressing glial cell functions, up to 17 weeks post-transplantation [220]. MSC-derived EVs also exert neuroprotective effects through reversing brain inflammation. Wang et al. found improved cognitive behaviors/synaptic transmission and suppressed expression of pro-inflammatory iNOS after MSC-derived EV administration in AD mice, and that down-regulation of iNOS expression indeed rescues neural function impairment in vivo [221]. In addition, MSC-derived EVs up-regulate the expression of anti-inflammatory factors such as IL-10 and tissue inhibitor of metalloproteinase 1 in activated microglia, implying an important role of MSC-derived EVs in initiating anti-inflammatory responses in AD mice [222]. Moreover, MSC-derived EVs show promising antioxidant effects in various ND models. For example, Chierchia et al. found that MSC-derived EVs elicit antioxidant effects by elevating Sirt3 expression in 6-OHDA-treated SH-SY5Y cells, which led to further neuroprotective effects in vivo [223]. Furthermore, a recent in vitro study revealed that adipose MSC-derived EVs protect NSC-34 cells overexpressing human SOD1(G93A) from oxidative stress and rescue NSC-34 cells from apoptosis, suggesting that MSC-derived EVs function as potential antioxidants in treating ALS [224]. Taken together, MSC-derived EVs have been shown to alleviate disease phenotypes in various cell and animal models through multiple mechanisms.
However, the therapeutic effects of MSC-derived EVs have to be confirmed in clinical studies, which are currently under way in China and many other countries (ClinicalTrials.gov Identifier: NCT04388982).

Neural stem cell (NSC)-derived EVs as potential therapeutic agents for the treatment of NDs

Unlike MSCs, NSCs are a population of endogenous stem cells in the CNS that play a crucial role in neural development and repair, as they differentiate into both neurons and neuroglial cells [225]. To date, the therapeutic roles of NSC-derived EVs have been studied in various models of NDs and acute neural injury, and encouraging results have been obtained [17,226]. Importantly, studies have demonstrated that NSC-derived EVs have better effects in improving neural function recovery than MSC-derived EVs [227]. These findings suggest that NSC-derived EVs inherit the great neurogenic/neuroregenerative potential of their parent cells, making them potential therapeutics for NDs. So far, investigations into the therapeutic effects of NSC-derived EVs on NDs have mainly focused on AD. Multiple independent groups have reported that NSC-derived EVs rescue cognitive defects in different AD animal models including 5×FAD and APP/PS1 transgenic mice [228,229]. Different pathological and molecular mechanisms of neurofunction restoration by NSC-derived EVs in NDs have been unveiled. Similar to MSC-derived EVs, NSC-derived EVs may also reduce the burden of key pathological molecules in NDs. A single injection of human NSC-derived EVs into the retro-orbital vein significantly reduces Aβ deposition in the brains of 5×FAD transgenic mice [228]. However, conflicting results have been reported: lateral ventricle injection of NSC-derived EVs does not alter Aβ burden in APP/PS1 transgenic mice [229]. Thus, the effects of NSC-derived EV-based therapies on Aβ deposition remain an open question. NSC-derived EVs also achieve their therapeutic effects through immunomodulation and neuroprotection. After intravenous injection, NSC-derived EVs inhibit the activation of microglia and the excessive expression of pro-inflammatory cytokines in AD mouse brains, most likely through the delivery of miR-124 and other inflammation-regulatory miRNAs [228]. Meanwhile, NSC-derived EVs also restore the levels of memory-related synaptic proteins and improve synaptic morphology in the cortex of AD mice by promoting mitochondrial function and decreasing oxidative damage, suggesting promising neuroprotective effects of NSC-derived EVs [229]. Similar results have been obtained in a 6-OHDA-induced PD model, in which administration of NSC-derived EVs down-regulates pro-inflammatory signals and decreases 6-OHDA-induced dopaminergic neuronal loss in vivo [230]. NSC-derived EVs also enhance neuroregeneration in 5×FAD mouse brains, including expanding the NSC pool and facilitating NSC differentiation into the neuronal lineage, presumably through transferring miRNAs (e.g., miR-9 and miR-21) and proteins (e.g., growth factors) to endogenous NSCs [17]. Moreover, NSC-derived EVs reverse AD-caused BBB disruption in vitro and in vivo [231]. Together with the finding that NSC-derived EVs promote angiogenesis in CNS injury [232], cerebrovascular regulation could be an important therapeutic effect of NSC-derived EVs. Notably, although NSC-derived EVs exhibit outstanding therapeutic effects, NSCs also have disadvantages, including ethical/religious concerns and the problematic logistics of acquiring fetal tissues, restricting mass production of EVs [233].
Our recent studies have generated NSC-like cells using a cell reprogramming approach [234]. In this approach, somatic cells such as fibroblasts and astrocytes are directly reprogrammed into induced NSCs (iNSCs) that exhibit comparable proliferation/renewal and multipotent differentiation capacities [235]. The iNSC-derived EVs exhibit comparable or even better performance in enhancing the proliferation, migration, and differentiation of NSCs in vitro via transferring growth factors including EGF, FGF2, and IGF [236]. The iNSC-derived EVs also significantly inhibit apoptosis of NSCs induced by oxidative stress or starvation [237]. More importantly, intravenous administration of iNSC-derived EVs promotes recovery of neurofunction and neural tissue regeneration, and suppresses neuroinflammation and neuronal injury, in a stroke model [17] and an AD mouse model (unpublished data), presumably through activation of the MEK/ERK signaling pathway. Therefore, both NSC- and iNSC-derived EVs display great therapeutic effects in cell and animal models of NDs, suggesting that NSC- and iNSC-derived EVs merit evaluation for clinical application alongside MSC-derived EVs.

Potential therapeutic effects of EVs derived from other stem cell types on NDs

Besides the aforementioned two types of stem cells, other types of stem cells have been utilized as EV producers for ND treatment [238]. For instance, NBC-derived EVs have been reported to trap Aβ and facilitate Aβ internalization into brain-resident phagocytic microglia [239]. This finding demonstrates that intracerebrally administered EVs utilize membrane glycosphingolipids (GSLs) to act as Aβ scavengers and suggests a role for NBC-derived EVs in Aβ clearance in the brain [239]. Furthermore, scientists have found that EVs derived from microcarrier-cultured stem cells from the dental pulp of human exfoliated deciduous teeth (SHEDs) suppress the 6-OHDA-induced apoptosis of dopaminergic neurons [238], probably through reducing the sensitivity of dopaminergic neurons to 6-OHDA-induced oxidative stress [240]. Notably, EVs derived from SHEDs under standard culture conditions do not exert a similar anti-apoptotic effect, indicating that culture conditions have a crucial influence on EV function. Hence, although multiple types of stem cells have been utilized to generate EVs as potential therapeutics for NDs, much more investigation is urgently required to assess the therapeutic effects of EVs derived from additional stem cell types, to fully unravel the underlying mechanisms of alleviation of ND phenotypes by stem cell-derived EVs, and to develop standards for preparing pharmaceutical-grade EVs in order to accelerate clinical applications of these EVs in ND treatment.

Engineered EVs as a potential drug delivery platform for treatment of NDs

Mounting studies have demonstrated that engineered EVs can be an effective platform for drug delivery [241,242]. As described previously in this review, EVs can permeate membranes including the BBB, making them an attractive platform for the delivery of drugs to the CNS [243,244]. Moreover, engineered EVs may also be able to target specific recipient cells for site-specific delivery [201,245], suggesting the feasibility of intravenous or intranasal delivery approaches that avoid neurosurgery (Table 4; Fig. 4).
Engineered EVs with modified cargos for the treatment of NDs

To date, EVs have been successfully used for the delivery of therapeutically active molecules, including RNAs, proteins, and pharmaceutical compounds, to the brain [241,242]. For instance, therapeutic genetic materials that can regulate gene expression have been transported to the brain by EVs to alter ND progression. Scientists have reported that therapeutic catalase mRNAs delivered by engineered EVs alleviate neurotoxicity and neuroinflammation in both cell and animal models of PD, indicating the potential value of EVs for the delivery of genetic materials in therapeutic applications [246]. The therapeutic potential of EV-based siRNA delivery for NDs has also been reported. One research group systemically injected modified EVs containing α-syn siRNA into S129D α-syn transgenic mice and found reduced mRNA and protein levels of α-syn in the mouse brain [247]. Another group reported that EVs carrying hydrophobically modified siRNA to the CNS efficiently targeted mHtt mRNA in an HD mouse model [248]. Although therapeutic siRNAs can be efficiently transported by EVs to the CNS [245], their effectiveness in ND treatment is limited by a short duration of efficacy and poor bioavailability in the systemic circulation [256]. Therefore, scientists investigated the efficacy of EV-delivered shRNAs. They found that α-syn shRNAs delivered by EVs reduce α-syn aggregation, decrease dopaminergic neuronal death, and alleviate PD symptoms in mice [249]. These studies again support the potential of EVs as a drug delivery platform for genetic modulators, such as siRNAs and shRNAs, into the CNS for therapeutic benefit. On the other hand, EV delivery has been demonstrated to increase the stability of various RNA-based therapies for NDs, as EVs protect RNAs from degradation [250].

Besides siRNAs and shRNAs, miRNAs with therapeutic effects have also been loaded into EVs to treat NDs. In a rat model of AD, bilateral hippocampal injection of miR-29-enriched MSC-derived EVs alleviates the pathological impacts of the Aβ peptide and improves spatial learning and memory by suppressing the expression of BACE1, suggesting an inhibitory effect of MSC-derived EVs on Aβ formation [251]. Similarly, miR-146a has been packaged into bone marrow MSC-derived EVs, which down-regulates NF-κB pathways in astrocytes and restores astrocytic activation, ultimately leading to improved synaptogenesis and amelioration of cognitive deficits in AD mice [167]. Adipose MSC-derived EVs specifically loaded with miR-22 enhance the motor and memory capability of AD mice by inhibiting inflammatory factors, down-regulating pyroptosis, and improving neural survival [252].

Fig. 4 The therapeutic effects of engineered EVs on NDs. EVs have been utilized as a drug delivery platform for the treatment of NDs. Therapeutic cargos including shRNA/siRNA/miRNAs that target ND-related genes, mRNAs that express Aβ degradation enzymes, and therapeutic drugs can be specifically loaded into EVs via transfection or physical strategies. Moreover, through being decorated with RVG, mannose, or PDGFA on the surface, EVs are further conferred targeting capacity to the CNS, microglia, and OPCs, respectively. These engineered EVs reach their target cells to facilitate Aβ clearance, mitigate oxidative stress, protect neuronal cells, inhibit neuroglial activation, promote remyelination, and restore BBB integrity, thus alleviating behavioral phenotypes of ND animal models.
Further studies demonstrated that MSC-derived EVs loaded with miR-188-3p exhibit anti-inflammatory and anti-apoptotic effects in PD animal models through inhibiting NLRP3-induced inflammation and cyclin-dependent kinase 5-induced autophagy [253]. In addition, EVs have been modified to package natural enzymes for the clearance of pathogenic molecules in NDs. After being transfected with catalase-encoding plasmid DNA, mouse macrophages release EVs preloaded with redox-active catalase [254]. These EVs significantly increase the viability of 6-OHDA-pretreated PC12 cells and decrease ROS levels in activated macrophages, suggesting that they could reduce neuroinflammation by decreasing ROS in activated microglia [254]. Indeed, in vivo experiments confirmed that catalase-preloaded EVs reduce microglial activation and increase the survival of dopaminergic neurons in 6-OHDA-intoxicated mice [254].

Apart from transporting genetic materials or modulators, various in vitro and in vivo studies have suggested EVs as promising vehicles for the direct delivery of therapeutic agents for ND treatment. One study reported that EVs isolated from human blood and preloaded with a saturated dopamine solution are able to cross the BBB and deliver dopamine into the CNS via interactions with transferrin and the transferrin receptor [255]. Another in vivo study showed that dopamine-preloaded EVs display greater therapeutic efficacy and lower toxicity than intravenously delivered free dopamine [201]. Furthermore, NSC-derived EVs have been utilized to package montelukast and bryostatin-1, two drugs for MS [256,257]. Results showed that the EV-based delivery of these drugs to lesion areas in cuprizone-treated mice, an animal model of MS, protects the myelin sheath and promotes remyelination, suggesting EVs as a novel drug delivery platform with great potential for the treatment of NDs [256,257].

Engineered EVs with modified surface for targeted therapy of NDs

Although stem cell-derived EVs have been proposed as potential treatment tools and have demonstrated beneficial efficacy in a number of NDs [258,259], further investigations and clinical trials are required to confirm the benefits of therapeutic application of EVs in NDs. To date, mounting evidence supports the idea that modifying EVs to enable specific targeting may hold substantial therapeutic benefits for NDs [245,260-264]. Currently, the most commonly used strategy to confer brain-targeting capacity on exosomes is to conjugate the CNS-specific rabies viral glycoprotein (RVG) peptide (YTIWMPENPRPGTPCDIFTNSRGKRASNG) to the exosomal membrane protein Lamp2b [245]. Through transfection of plasmids encoding RVG-Lamp2b constructs, cells release RVG-expressing exosomes that can be used to deliver desired molecules into the CNS. Multiple studies have utilized this approach to deliver siRNAs, shRNAs, and miRNAs into the brain through dendritic cell- and MSC-derived exosomes, with outstanding outcomes in alleviating AD and PD phenotypes in vivo [167,247,249]. It is noteworthy that RVG may not help exosomes reach the CNS through retrograde transport from peripheral nerves [261,265]; instead, RVG may confer BBB penetration capacity on exosomes through direct interaction with nicotinic acetylcholine receptors expressed on endothelial cells [261,266].
A similar strategy, which expresses the cyclo(Arg-Gly-Asp-D-Tyr-Lys) peptide [c(RGDyK)] on the exosomal surface, exploiting its high affinity for integrin αvβ3 on reactive cerebral vascular endothelial cells, successfully facilitates the accumulation of modified exosomes in brain lesions compared with the undamaged tissue on the contralateral side of the brain in an in vivo ischemic model [262]. Moreover, scientists have bioengineered EVs to target specific types of cells in the CNS. For instance, to address the inefficient clearance of Aβ caused by abnormal lysosomal function in microglia in AD, EVs have been bioengineered to target microglia by adding mannose to the EV surface [267]. Mannose-expressing EVs specifically bind to the mannose receptor (CD206), a microglia-enriched protein, thereby enhancing the uptake of EVs by microglia. Through this approach, EVs deliver gemfibrozil to restore the lysosomal activity of microglia, accelerate lysosome-mediated clearance of Aβ in microglia, and successfully improve the learning and memory ability of AD mice [267]. Similarly, PDGFA can be expressed on engineered EVs, conferring excellent affinity for the oligodendrocyte progenitor cell (OPC) surface receptor PDGFRα [257]. PDGFA-expressing EVs therefore transfer montelukast to OPCs to promote oligodendrocyte generation and myelin regeneration, resulting in mitigation of MS-like phenotypes in a cuprizone-induced demyelination animal model [257].

To summarize, EVs orchestrate various events that facilitate recovery and regeneration in neurodegenerative conditions. Many efforts have been made to improve the homing properties of EVs so that they can convey therapeutic agents to brain sites and potentiate recovery. Merging the intrinsic attributes of EVs with targeted medicine is proposed as a novel therapeutic strategy that may exert a profound influence on the future of ND treatment. However, with a view to clinical translation, some technical challenges still need to be solved. One important challenge is how to enhance the BBB penetration and targeting potential of EVs. To date, the mechanisms by which EVs cross the BBB remain controversial, and multiple theoretical routes have been proposed [268]. According to these theories, EVs can be macropinocytosed or transcytosed into the MVBs of endothelial cells through the endocytic pathway, and then traffic from the MVB to the plasma membrane as neoformed exosomes [268,269]. EVs may also cross the BBB through the paracellular pathway when BBB integrity is disturbed under pathological conditions [269]. Thus, further studies that sort out the BBB penetration mechanisms could dramatically increase the number of EVs that reach the CNS after intravenous or even oral administration. Moreover, the cargo loading efficiency of EVs remains limited. Intrinsically packed natural molecules (e.g., proteins and nucleic acids) significantly increase the difficulty of loading desired cargos, resulting in a much lower cargo loading efficiency for EVs than for unpacked synthetic liposomes [270,271]. Besides, other technical issues, such as quality control complicated by the high heterogeneity of EVs, the difficulty of scaling up to industrial manufacturing, and the high costs of production and storage, also impede the application of EVs for drug delivery [270].
Conclusions and future perspectives

In summary, numerous studies have demonstrated a tight association of EVs with NDs, including but not limited to the direct delivery of pathogenic molecules, the modulation of inflammatory responses of immune cells, the regulation of neuronal cell function and viability, and the manipulation of BBB integrity. Encouragingly, the aforementioned findings suggest great value for translational medicine. With the help of newly developed immunoprecipitation approaches, EVs derived from specific types of brain cells can be purified, and certain cargos within these EVs have been reported to be outstanding biomarkers for the diagnosis and prognosis of NDs, providing novel opportunities to realize early diagnosis, a key step for the effective prevention of irreversible neurodegeneration. Moreover, drugs targeting pathogenic EVs, as well as EVs with therapeutic effects or drug delivery capacity, have demonstrated promising therapeutic potential in cellular and animal models of NDs, including mitigating neurofunctional impairment, alleviating neuroinflammation and neurotoxicity, and reducing neurodegeneration and neuronal loss. With extensive investigation, more pathological and beneficial roles of EVs in ND pathogenesis, and the underlying mechanisms, could be unveiled, dramatically expanding our understanding of EVs within the CNS and shedding light on the development of EV-based therapeutic strategies for more precise diagnosis and more effective treatment of NDs.
Comprehensive power spectral density analysis of the Fermi-LAT $\gamma$-ray light curves of selected blazars

We present the results of the Fermi-Large Area Telescope (LAT) light curve (LC) modelling of selected blazars: six flat spectrum radio quasars (FSRQs) and five BL Lacertae (BL Lacs). All objects have densely sampled and long-term LCs, over 10 years. For each blazar we generated three LCs with 7, 10, and 14 days binning, using the latest LAT 8-year source catalog and binned analysis provided within the fermipy package. The LCs were modelled with several tools: the Fourier transform, the Lomb-Scargle periodogram (LSP), the autoregressive moving average (ARMA), the fractional autoregressive integrated moving average, the continuous-time autoregressive moving average (CARMA) processes, the Hurst exponents ($H$), the $\mathcal{A}-\mathcal{T}$ plane, and the wavelet scalogram. The power law indices $\beta$ calculated from the Fourier and LSP modelling are consistent with each other. Many objects yield $\beta\simeq 1$, with PKS 2155$-$304 even flatter, but some are significantly steeper, e.g. Mrk 501 and B2 1520+31. The power law PSD is indicative of a self-affine stochastic process characterised by $H$, underlying the observed variability. Several algorithms for the $H$ estimations are employed. For some objects we found $H>0.5$, indicating long-term memory. We confirm a quasi-periodic oscillation (QPO) in the PKS 2155$-$304 data with the period of $612\pm 42$ days at a $3\sigma$ significance level, but do not detect any QPOs in other objects. The ARMA results give in general higher orders for 7 days binned LCs and lower orders for 10 and 14 days binned LCs, implying temporal variations in the LCs are consistently captured by the fitted models. CARMA modelling leads to featureless PSDs. The recently introduced $\mathcal{A}-\mathcal{T}$ plane allows us to successfully classify the PSDs based on the LCs alone and clearly separates the FSRQ and BL Lac types of blazars.

Introduction

Blazars form a peculiar class of active galactic nuclei (AGNs). They are radio-loud objects pointing their relativistic jets towards an observer and having a non-thermal continuum along the entire electromagnetic spectrum. One of their properties is rapid variability in different energy bands, lasting from months down to minutes. Generally, blazars are split into two groups, namely flat spectrum radio quasars (FSRQs) and BL Lacertae (BL Lac) objects, based on characteristics visible in their optical spectra: FSRQs have prominent emission lines, while BL Lac spectra are featureless or show only weak lines. We aim to conduct a comprehensive analysis of the temporal properties of the light curves (LCs) of blazars in our sample, in particular constraining the shape and features of the power spectral density (PSD) to look for short- and long-lasting features. This can be used to identify the regions and physical processes responsible for the variability and, in some cases, to estimate the black hole (BH) mass of blazars. First of all, we employ standard and well-established methods to study the characteristics of PSDs, such as breaks, which can point to the regions responsible for variability. Subsequently, we want to verify the existence of quasi-periodic oscillations (QPOs), defined as a "concentration of variability power over a limited frequency range" [1]. This can shed additional light on the structure of blazars.
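For readers unfamiliar with PSD estimation for unevenly sampled light curves, the following is a minimal Python sketch, not the analysis pipeline of this work: it computes a Lomb-Scargle periodogram with astropy and fits a pure power law $P(f) \propto f^{-\beta}$ in log-log space. The light curve here is a synthetic, uncorrelated stand-in, so the fitted $\beta$ should come out near 0; a real blazar LC would yield the steeper indices discussed below.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic stand-in for a 7-day-binned, ~11-year gamma-ray light curve.
rng = np.random.default_rng(0)
t = np.arange(0.0, 11 * 365.0, 7.0)                  # days
flux = rng.lognormal(mean=0.0, sigma=0.5, size=t.size)
flux_err = 0.1 * flux

# Lomb-Scargle periodogram between the lowest resolvable frequency and
# the pseudo-Nyquist frequency of the 7-day binning.
freq = np.linspace(1.0 / t.ptp(), 1.0 / (2.0 * 7.0), 2000)
power = LombScargle(t, flux, flux_err).power(freq)

# Pure power-law model P(f) ~ f^(-beta): a straight line in log-log space.
slope, intercept = np.polyfit(np.log10(freq), np.log10(power), deg=1)
beta = -slope
print(f"fitted power-law index beta = {beta:.2f}")
```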
The sample

We analysed, with a number of techniques, the Fermi-LAT $\gamma$-ray LCs of 11 well-known blazars, including six FSRQs, PKS 1510−089, 3C 279, B2 1520+31, B2 1633+38, 3C 454.3, and PKS 1830−211, and five BL Lacs, Mrk 421, Mrk 501, PKS 0716+714, PKS 2155−304, and TXS 0506+056. We performed a standard binned maximum likelihood analysis, using the Fermitools and the fermipy packages. In this analysis, we used data from the LAT 8-year Source Catalog [4FGL; 3], spanning the time range of 239557417-577328234 MET, which corresponds to ∼11 years, in the energy range of 100 MeV up to 300 GeV. We generated a set of three LCs for each blazar, using 7, 10, and 14 days binning. Only the observations with the test statistic > 25 (significance of $5\sigma$) were taken into account. Eventually, 33 LCs were generated and then analysed. Since the fraction of missing points can reach 13%, we utilised the method of interpolation by autoregressive and moving average [MIARMA; 4]. Figure 1 presents examples of the LCs.

Methodology

This proceedings contribution is based on the publication [2], where the methodology is described in detail.

Fourier transform and Lomb-Scargle periodogram: these PSD estimates were used to investigate the global components of the LCs, such as short- and long-lasting variations. We fitted three models to the generated PSDs, namely a pure power law (PL), a PL with Poisson noise (PLC), and a smoothly broken PL (SBPL). The better model was chosen based on the Akaike Information Criterion [AIC; 7], which we evaluated via the difference, $\Delta_i = \mathrm{AIC}_i - \mathrm{AIC}_{\rm min}$, between the AIC of the $i$-th model and the minimal value ($\mathrm{AIC}_{\rm min}$). If $\Delta_i < 2$, then both models are equally good, and we chose the pure PL model as the adequate one since it is simpler.

Wavelet scalogram [8] is a two-dimensional time-frequency representation of the energy-density map, showing the temporal localisation of a frequency present in the signal and allowing us to study the local components and their time evolution. Within this method we employed significance testing to search for QPOs at the $\geq 3\sigma$ level.

ARMA and CARMA modelling: the autoregressive moving average process [ARMA; 9] and the continuous-time ARMA process [CARMA; 10] are stochastic processes applied to detect different types of variability in the data, to uncover QPOs, and to determine the variability-based classification of astrophysical sources. In the case of the CARMA processes, a PSD can be composed of a number of zero-centred Lorentzians, which define breaks, while the non-zero-centred Lorentzians are used to model QPOs. Moreover, the CARMA modelling allows us to handle irregular sampling and measurement errors.

Hurst exponent [11] measures the statistical self-similarity of a time series. The self-similarity is connected to a long-range dependence, referred to as memory, of a process via the autocorrelation function. The properties of $H$ can be summarized as follows: $H$ takes values between 0 and 1; $H = 0.5$ corresponds to an uncorrelated process (white noise or Brownian motion); if $H > 0.5$, then a process is persistent, i.e. exhibits long-term memory; in the opposite case, $H < 0.5$, one deals with an anti-persistent (short-term memory) process.

The $\mathcal{A}-\mathcal{T}$ plane [12] is used to differentiate various types of coloured noise. The plane consists of the fraction of turning points ($\mathcal{T}$), which quantifies the noisiness of a time series, and the Abbe value ($\mathcal{A}$), which quantifies the smoothness of a time series. If $\mathcal{T}$ is asymptotically equal to 2/3, the time series constitutes a purely random time series, or white noise. A process with $\mathcal{T} > 2/3$ is more noisy than white noise, while for $\mathcal{T} < 2/3$ a process is less noisy than white noise.
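The two statistics that span the $\mathcal{A}-\mathcal{T}$ plane are simple to compute. Below is a minimal, illustrative implementation (not the code used in [2]), taking the Abbe value as half the mean squared successive difference divided by the variance; the test series are synthetic.

```python
import numpy as np

def abbe(x):
    """Abbe value A: half the mean squared successive difference over the
    variance. ~1 for white noise; -> 0 for a smooth, strongly correlated series."""
    x = np.asarray(x, float)
    return 0.5 * np.mean(np.diff(x) ** 2) / np.var(x)

def turning_points_fraction(x):
    """Fraction T of interior points that are strict local extrema.
    The expected value for an i.i.d. (purely random) series is 2/3."""
    x = np.asarray(x, float)
    inner, left, right = x[1:-1], x[:-2], x[2:]
    tp = ((inner > left) & (inner > right)) | ((inner < left) & (inner < right))
    return tp.mean()

# Synthetic examples: white noise vs a random walk (red-noise-like series).
rng = np.random.default_rng(1)
white = rng.normal(size=5000)
walk = np.cumsum(white)
for name, series in [("white noise", white), ("random walk", walk)]:
    print(f"{name}: A = {abbe(series):.2f}, T = {turning_points_fraction(series):.2f}")
```

For the white noise series this prints A ≈ 1 and T ≈ 2/3, while the random walk lands at a much smaller A and T ≈ 0.5, i.e. in the "less noisy than white noise" region of the plane.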
Results

We analysed the Fermi-LAT $\gamma$-ray LCs of 11 blazars, six FSRQs and five BL Lacs, employing a number of techniques. We found the following results.

1. The $\beta$ values calculated with the Fourier spectra and the LSP are consistent with each other for the majority of blazars; however, we noticed a discrepancy for 3C 454.3. In this case, the Fourier PSD is described by a pure PL, while the LSP is fitted better by a PLC. The former PSD is flatter than the latter. The fit of the SBPL model was not competitive in any of the cases under consideration. Overall, the shapes of the PSDs indicate coloured noise with $1 \lesssim \beta \lesssim 2$, i.e. between pink and red noise. We suggest that each object can be treated as a realisation of a single stochastic process underlying the observed variability.

2. The only significant ($\geq 3\sigma$) QPO we found using the wavelet scalograms is the well-known QPO in PKS 2155−304, with a period of 612 ± 42 days (Figure 2). Moreover, we noticed a QPO candidate in the B2 1633+38 data, evolving from ∼500 days to >1000 days, and one in PKS 0716+714 at 1000 days, lasting 2 to 3 cycles only. These objects require additional observations before one can conclude whether a QPO actually exists in their data. We do not find significant QPOs in the studied LCs of the remaining blazars in our sample.

3. ARMA and CARMA models suggest breaks in the PSDs at time scales of a few hundred days in all blazars in the sample but 3C 454.3 and B2 1520+31. We searched for the best CARMA($p$, $q$) model with $1 \leq p \leq 7$ and $0 \leq q \leq 6$, $q < p$. We obtained the orders (1, 0) or (2, 1) for the majority of cases. In general, FSRQs are described by the former model, while the latter represents BL Lacs. We do not observe any additional features in the data of the analysed objects.

4. The Hurst exponents are $H > 0.5$ for the majority of BL Lacs in our sample, indicating the presence of long-term memory. The FSRQs swing back and forth between 0 and 1. Only 3C 454.3 keeps $H < 0.5$, being an anti-persistent system. Mrk 421 and PKS 0716+714 also oscillate over the entire range. This evolving behaviour does not allow us to formulate an unambiguous claim about the persistence of these objects.

5. The FSRQs are characterised by lower values of $\mathcal{A}$ than BL Lacs, and these two classes of blazars are clearly separated in the $\mathcal{A}-\mathcal{T}$ plane (Figure 3). This was earlier discovered by [13] in the I-band optical LCs of blazars and blazar candidates behind the Magellanic Clouds [14]. This finding shows that the flux changes are different for the two blazar classes; thus they should be driven by different physical mechanisms or take place in different blazar components. The separation allows us to distinguish the blazar classes based on LCs alone, without including multiwavelength, polarimetric, and spectroscopic properties. Furthermore, the location in the $\mathcal{A}-\mathcal{T}$ plane reveals properties of the structure of the LCs that are not captured by the other methods used in this work.

Conclusions

In our research, we considered a stochastic description to model the variability of blazars. All blazars in our sample are characterized by long timescales, consistent with the conclusion that their variability originates in the accretion disk. The timescales also point to the physical processes responsible for the $\gamma$-ray production, i.e. external Compton in the case of the FSRQs and synchrotron self-Compton for the BL Lacs. A detailed elaboration of the results and conclusions is presented in [2].
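As a supplement to point 4 above, a minimal rescaled-range (R/S) estimator of the Hurst exponent is sketched below. This is only one of the several $H$-estimation algorithms mentioned in the abstract, and the windowing choices are illustrative.

```python
import numpy as np

def hurst_rs(x, window_sizes=None):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log <R/S>(n) versus log n. H = 0.5 means no memory, H > 0.5 persistence
    (long-term memory), H < 0.5 anti-persistence."""
    x = np.asarray(x, float)
    n_total = x.size
    if window_sizes is None:
        window_sizes = np.unique(np.logspace(1, np.log10(n_total // 4), 12).astype(int))
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, n_total - n + 1, n):
            block = x[start:start + n]
            z = np.cumsum(block - block.mean())   # mean-adjusted cumulative deviations
            r, s = z.max() - z.min(), block.std()
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(2)
# Expected ~0.5 for white noise (small-sample bias can push it slightly higher).
print(f"H(white noise) = {hurst_rs(rng.normal(size=8192)):.2f}")
```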
MiR-26 down-regulates TNF-α/NF-κB signalling and IL-6 expression by silencing HMGA1 and MALT1

MiR-26 has emerged as a key tumour suppressor in various cancers. Accumulating evidence supports that miR-26 regulates inflammation and tumourigenicity largely through down-regulating IL-6 production, but the underlying mechanism remains obscure. Here, combining a transcriptome-wide approach with manipulation of cellular miR-26 levels, we showed that instead of directly targeting IL-6 mRNA for gene silencing, miR-26 diminishes IL-6 transcription activated by TNF-α through silencing the NF-κB signalling related factors HMGA1 and MALT1. We demonstrated that miR-26 extensively dampens the induction of many inflammation-related cytokine, chemokine and tissue-remodelling genes that are activated via the NF-κB signalling pathway. Knocking down both HMGA1 and MALT1 by RNAi had a silencing effect on NF-κB-responsive genes similar to that caused by miR-26. Moreover, we discovered that poor patient prognosis in human lung adenocarcinoma is associated with the combination of low miR-26 and high HMGA1 or MALT1 levels, and not with the levels of any of them individually. These new findings not only unravel a novel mechanism by which miR-26 dampens IL-6 production transcriptionally but also demonstrate a direct role of miR-26 in down-regulating the NF-κB signalling pathway, thereby revealing a more critical and broader role of miR-26 in inflammation and cancer than previously realized.

INTRODUCTION

MiR-26 exhibits tumour suppressor activity (reviewed in (1)) and has emerged as a key regulator in carcinogenesis and tumour progression. Ectopic expression of miR-26 inhibits proliferation, induces apoptosis and/or decreases tumourigenicity in multiple cancers, whereas down-regulation of miR-26 has been observed across multiple tumour types (2-5). An inverse relationship between levels of miR-26 and Interleukin-6 (IL-6) was observed in some tumour cells (6,7). It has been thought that miR-26 regulates inflammation and tumourigenicity largely through down-regulating IL-6. The mechanisms of miR-26 action in regulating IL-6 production, inflammation and tumour proliferation remain obscure.

Previously, a potential miR-26 recognition site was predicted in the 3′ UTR of IL-6 mRNA (17,18). Binding of miR-26 to this site was proposed to elicit rapid degradation of IL-6 mRNA and thus silence IL-6 expression in human alveolar basal epithelial A549 cells activated by TNF-α (18). However, the region containing this site has been reported to have little effect on IL-6 mRNA levels in monkey and mouse cell models (19). Moreover, mutating the predicted miR-26 site in the 3′ UTR of IL-6 mRNA had no effect on the translation of IL-6 in HeLa cells (20). These observations argue against a direct action of miR-26 in silencing the IL-6 message. Given that inflammation is a major factor contributing to malignancy, and given the roles of miR-26 and IL-6 in this process, it is important to understand the mechanism by which miR-26 regulates IL-6 production in the context of the cellular inflammatory response. In this study, we employed a variety of approaches to elucidate the mechanism underlying miR-26-mediated regulation of IL-6 production. Our results demonstrated that miR-26 does not directly target the IL-6 transcript for rapid decay or translational repression in either human bronchial epithelial BEAS-2B or adenocarcinomic alveolar basal epithelial A549 cells. Rather, miR-26 down-regulates production of IL-6 via actions on NF-κB signalling.
Our data further revealed that miR-26 represses IL-6 transcription through silencing the expression of MALT1 and HMGA1, two proteins with critical functions in mediating NF-κB signalling and tumourigenicity (21-24), in BEAS-2B cells. Moreover, we discovered an inverse relationship between the levels of miR-26 and of HMGA1 or MALT1 transcripts in lung adenocarcinoma (LUAD), which is linked to LUAD patient survival. Our results not only identify a novel mechanism by which miR-26 dampens IL-6 production transcriptionally through down-regulating the NF-κB signalling pathway but also point to a direct and broader role for miR-26 in inflammation and malignancy.

Plasmids

A Renilla luciferase (RL) reporter gene driven by the human GAPDH promoter (pLightSwitch-Prom-GAPDH) was purchased from SwitchGear Genomics. A firefly luciferase (FL) reporter gene driven by a minimal promoter containing an NF-κB response element (pGL4.32[luc2P/NF-κB-RE/Hygro]) was purchased from Promega. To construct pIL-6-FL, a 2.2-kb fragment carrying the human IL-6 promoter that contains the transcription elements described previously (25-27) was PCR-amplified using genomic DNA purified from BEAS-2B cells and inserted into pGL4.13. To create pRL-IL-6 3′UTR, pRL-IL-6 5′UTR or pRL-IL-6 ORF, the corresponding regions from a human IL-6 cDNA were inserted into the RL 3′ UTR in psiCHECK2 (Promega). To construct pRL-3×26(IL-6) and pRL-3×26(GW182), DNA fragments containing three copies of the putative miR-26 recognition site (see Supplementary Figure S3A and B for the sequences) were synthesized (Integrated DNA Technologies) and inserted into the RL 3′ UTR in psiCHECK2. The plasmid pRL-IL-6 3′UTR(Δ26) was created by PCR-based mutagenesis to specifically truncate the seed-binding region of the predicted miR-26 site. Plasmids carrying sequences coding for the ORF of HMGA1 or MALT1 (pIRES-HMGA1 and pcDNA-FLAG-MALT1, respectively) were purchased from Addgene.

Cell culture, transfection and dual luciferase assay

The human bronchial epithelial cell line BEAS-2B and the adenocarcinomic human alveolar epithelial A549 cell line were purchased from ATCC and cultured as instructed by the manufacturer. Cells were seeded following the first transfection of miRNA mimics and incubated for 1 day before the second transfection with both miRNA mimics and plasmid DNA (for BEAS-2B cells) or with only plasmid DNA (for A549 cells). After the second transfection, cells were incubated for 1 day before TNF-α treatment. For TNF-α stimulation, cells were cultured for 6 h (BEAS-2B) or 1 h (A549) in medium containing 50 ng/ml of TNF-α (18,28). For dual luciferase assays, BEAS-2B cells were transfected with 80 ng of plasmid(s) per 10-cm plate using X-tremeGENE 9 (Roche), and A549 cells were transfected with 100 ng of plasmid(s) per 6-cm plate using Lipofectamine 2000 (Invitrogen), following the manufacturers' protocols. Transfections of miRNA mimics, miRNA antagomirs (inhibitors) and siRNAs were carried out using Lipofectamine RNAiMAX (Invitrogen) according to the manufacturer's protocol. MicroRNA and control mimics were purchased from Sigma, including miR-26a (HMI0415), miR-26b (HMI0419) and control mimics (HMC0002). The miR-26 antagomir (Cat #: 4464084) and control inhibitors (Cat #: 4464076) were purchased from ThermoFisher Scientific.
Small interfering RNAs (siRNAs) were purchased from Sigma and GE Dharmacon, including the following: negative control siRNA (SIC001, Sigma), human HMGA1 siRNAs (#1, SASI Hs01 00186717, Sigma; #2, SASI Hs01 00134689, Sigma) and human MALT1 siRNAs (#1, SASI Hs02 0042305, Sigma; #2, J-005936-06-07-08-09, SmartPool from GE Dharmacon). For transfection with both small RNA (i.e. miRNA mimic or siRNA) and plasmid DNA, cells were transfected with the small RNAs first and then with plasmid DNA, with or without small RNAs, the next day. Transfected cells were harvested 42-48 h after transfection for dual luciferase assays, western blot analysis and/or RNA extraction. For dual luciferase assays, the Renilla and firefly luciferase activities in cell lysates were analysed using a Dual-Glo Luciferase Assay System (Promega) according to the manufacturer's protocol, and luminescence was scanned and recorded with a Tecan Infinite 200 Microplate Reader (Tecan Trading AG, Switzerland).

RNA extraction and real-time PCR analysis

Total RNA was extracted using Trizol (Invitrogen) or the RNeasy Mini Kit (QIAGEN) following the manufacturers' protocols. For measurements of mRNA decay in BEAS-2B cells, transcription was blocked with actinomycin D (ActD; 5 μg/ml) and cells were harvested for RNA preparation at various subsequent time points. Real-time quantitative RT-PCR was performed using 1 μg of total RNA in a 10 μl reverse transcription reaction containing 50 units of MultiScribe reverse transcriptase (Applied Biosystems). The reaction was incubated at 37°C for 120 min, followed by incubation at 25°C for 10 min and then at 85°C for 5 min. After reverse transcription, 10 μl PCR reactions containing 1X TaqMan Gene Expression Assay (Applied Biosystems), which has premixed TaqMan MGB probes and primers, 1X TaqMan Universal Master Mix II (Applied Biosystems), which has DNA polymerase, dNTPs, salt and buffer, and 25-50 ng of cDNA were run on the LightCycler 384 Real-time PCR system (Roche) according to the manufacturer's protocol. Half-lives of mRNAs were determined by least squares regression of each time point data set to a one-exponential decay equation (29). The mRNA levels of NF-κB-responsive genes in BEAS-2B cells were analysed using the Human NF-κB Signaling Targets RT² Profiler PCR Array (Qiagen) according to the manufacturer's protocol. Data analysis was performed using Qiagen online software (http://pcrdataanalysis.sabiosciences.com/pcr/arrayanalysis.php).

RNA-sequencing

We employed whole-transcriptome RNA-seq to investigate changes in TNF-α responsiveness in response to alterations of miR-26 levels in a genome-wide fashion. RNA samples from BEAS-2B cells transfected with miR-26a mimic, control mimic, miR-26 antagomir or control antagomir, with or without TNF-α stimulation, were sequenced on a HiSeq 2000 (Axeq Technologies). The resulting 100×2 paired-end RNA-seq reads were aligned to the human genome (hg19) and their abundance estimated using TopHat (v1.3.3). We verified the quality of the sequencing with FastQC. More than 12,900 human RefSeq genes were detected by RNA-seq with expression levels above 1 FPKM (fragments per kilobase of transcript sequence per million mapped paired-end reads) in each of the eight RNA samples. Additional details of the analysis of the RNA-seq datasets can be found in the main text. Raw RNA-sequencing data have been deposited in the Gene Expression Omnibus under accession number GSE70831.
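The half-life determination described above reduces to a linear least-squares fit in log space. A minimal Python sketch, using hypothetical ActD time-course values rather than the paper's measurements:

```python
import numpy as np

# Hypothetical ActD time course: hours after the transcription block and the
# fraction of mRNA remaining (normalized to GAPDH and to time 0).
time_h = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
fraction = np.array([1.00, 0.85, 0.72, 0.55, 0.41])

# One-exponential decay, m(t) = m0 * exp(-k t), is linear in log space:
# ln m(t) = ln m0 - k t, so k is recovered by least squares on ln(fraction).
slope, ln_m0 = np.polyfit(time_h, np.log(fraction), deg=1)
k = -slope
half_life = np.log(2) / k
print(f"decay constant k = {k:.3f} per hour, half-life = {half_life:.1f} h")
```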
For all the TCGA-related analyses, Level 3 processed RNA-seq files were downloaded from the Broad Firehose database (http://gdac.broadinstitute.org/).

RESULTS

The 3′ UTR of IL-6 mRNA does not contain a functional miR-26-responsive site for gene silencing in human BEAS-2B or A549 cells

To investigate the mechanism by which miR-26 regulates IL-6 production in the context of airway inflammation, we chose for this study a widely used human bronchial epithelial cell line, BEAS-2B (e.g. see (28,31-33)). Consistent with our previous observation (28), TNF-α significantly increased IL-6 expression in BEAS-2B cells (Supplementary Figure S1A). As the miR-26 family includes miR-26a and miR-26b (34), and both are predicted to recognize target sites containing the same complementary seed region, we checked both for their effects on IL-6 expression in BEAS-2B cells. The results showed that either the miR-26a or the miR-26b mimic significantly reduced IL-6 mRNA levels in TNF-α-activated BEAS-2B cells (Supplementary Figure S1B) and IL-6 protein levels in the culture medium (Supplementary Figure S1C) to similar extents.

We then tested whether the predicted miR-26 recognition site in the 3′ UTR of the IL-6 mRNA is required to repress IL-6 production. Besides the predicted miR-26 recognition site, the IL-6 3′ UTR contains several known or potential RNA destabilizing elements (Supplementary Figure S2), including two AU-rich elements (AREs), an upstream region that carries an endonuclease cleavage site recognized by the ribonuclease ZC3H12A (Regnase I), and a potential let-7 binding site (19,35,36). As with many cytokines and chemokines, IL-6 production can be effectively down-regulated through the ARE-mediated mRNA decay pathway (19). For our test, we first introduced the entire IL-6 3′ UTR into the 3′ UTR of an RL reporter to create the RL-IL-6 3′UTR construct and tested the activity by dual-luciferase assay. Activity from FL mRNA served as an internal control for normalization. The results showed that the luciferase activity from the RL-IL-6 3′UTR reporter (which contains several potential RNA destabilizing elements) was dramatically lower than that of the RL 3′UTR control in BEAS-2B cells (Figure 1A). We then truncated either the seed region of the predicted miR-26 recognition site or an ARE in the IL-6 3′ UTR to create the RL-IL-6 3′UTR(Δ26) or RL-IL-6 3′UTR(ΔAREI) construct. The dual-luciferase assay showed that the activity derived from the RL-IL-6 3′UTR(ΔAREI) reporter is appreciably higher than that from the reporter carrying the IL-6 3′ UTR (Figure 1A), consistent with a gene silencing function of the ARE (19). On the other hand, the activity derived from the RL-IL-6 3′UTR(Δ26) reporter is similar to that from the reporter carrying the IL-6 3′ UTR (Figure 1A). Thus, the predicted miR-26 recognition site does not contribute to the ability of the IL-6 3′ UTR to down-regulate the IL-6 level.

We then tested whether the predicted miR-26 recognition site in the IL-6 3′ UTR, albeit dispensable, might be responsive to miR-26 for gene silencing. We introduced three consecutive copies of the site (Supplementary Figure S3A) into the RL reporter to create the RL-3×26(IL-6) construct and transfected it into BEAS-2B cells with miR-26 mimics or a negative control mimic. For a positive control, we introduced three consecutive copies of a miR-26 recognition site found in human TNRC6A (37), also known as GW182 (Supplementary Figure S3B), into the RL 3′ UTR to create RL-3×26(GW182).
The dual-luciferase assay showed significantly lower activity from the RL-3×26(GW182) transcript in the presence of either the miR-26a or the miR-26b mimic than in the presence of the control mimic (Figure 1B, C, and Supplementary Figure S4A). In contrast, the respective luciferase activities derived from the RL 3′UTR, RL-IL-6 3′UTR, RL-IL-6 3′UTR(Δ26) and RL-3×26(IL-6) transcripts in the presence of miR-26 mimics were similar to the corresponding activities in the presence of a control mimic (Figure 1B, C, and Supplementary Figure S4A). Taken together, these results indicate that the IL-6 3′ UTR lacks a miR-26-responsive site in BEAS-2B cells.

Figure 1. (A) Histogram showing that the predicted miR-26 recognition site in the IL-6 3′ UTR is dispensable for silencing function. RL activities in BEAS-2B cells expressing RL reporter mRNA carrying the IL-6 3′ UTR, the IL-6 3′ UTR with the miR-26 seed region truncated (IL-6 3′UTR(Δ26)) or the IL-6 3′ UTR with deletion of an ARE (IL-6 3′UTR(ΔAREI)) were detected by dual-luciferase assay. FL activity derived from the same plasmid carrying the RL gene served as a control for normalization. The RL/FL activity detected in cells expressing the RL 3′UTR control mRNA was set as 1. All data represent the mean ± standard error (n = 3). (B-E) Histograms showing relative changes of RL activity derived from the indicated reporter mRNAs in TNF-α-stimulated BEAS-2B (B and C) or A549 (D and E) cells in the presence of miR-26a (light blue bars), miR-26b (light green bars), or a control miRNA mimic (dark blue bars). The RL reporter mRNA carrying three copies of the predicted miR-26 recognition site from human GW182 (3×26(GW182)) served as a positive control. FL activity, derived from the same plasmid carrying the RL reporter gene, was used for normalization. The relative silencing effects were measured by comparing the RL/FL activity detected in cells expressing each reporter mRNA in the presence of miR-26 mimics with that detected in cells expressing the corresponding reporter mRNA in the presence of the control miRNA mimic (set as 1, representing no silencing effect). All data represent the mean ± standard errors (n = 3). T-test was done to assess statistical significance. ***P < 0.001; ****P < 0.0001.

Since a previous study suggested that miR-26 directly targets the predicted miR-26 recognition site in the IL-6 3′ UTR in A549 cells (18), we also looked for silencing function of the IL-6 3′ UTR, IL-6 3′UTR(Δ26) or 3×26(IL-6) in response to miR-26 mimics or a control mimic in A549 cells. The dual-luciferase assay (Figure 1D, E, and Supplementary Figure S4B) showed that the activity derived from the positive control RL-3×26(GW182) transcript was effectively repressed by miR-26 mimics, whereas the activities derived from the reporters carrying the IL-6 3′ UTR, IL-6 3′UTR(Δ26) or 3×26(IL-6) were similar with miR-26 mimics and the control mimic in A549 cells. Thus, the predicted miR-26 recognition site in the IL-6 3′ UTR does not respond to miR-26 mimics in either BEAS-2B or A549 cells. We conclude that the 3′ UTR of the IL-6 mRNA does not contain a functional miR-26-responsive site.

MiR-26 does not directly target IL-6 mRNA per se for gene silencing

We then considered whether miR-26 might exert gene silencing effects by targeting an unexpected recognition site in other regions of the IL-6 transcript.
We introduced the 5′ UTR or the open-reading frame (ORF) of IL-6 into the 3′ UTR of the RL mRNA and evaluated the corresponding luciferase activities in BEAS-2B cells co-transfected with the miR-26a mimic or a control mimic. The dual-luciferase assay (Figure 2A) showed that while the miR-26a mimic appreciably repressed the luciferase activity derived from the control RL-3×26(GW182) transcript, it had little effect on the activities from the transcripts containing either the IL-6 5′UTR or the IL-6 ORF. These results indicate that the entire IL-6 transcript lacks a sequence suitable for direct interaction with miR-26 to exert a gene silencing function. As mammalian miRNAs silence their direct targets largely through eliciting rapid degradation of their mRNAs (38-40), to further corroborate the above findings, we checked whether miR-26 can promote IL-6 mRNA degradation. We performed time-course experiments using actinomycin D to block transcription in BEAS-2B cells activated by TNF-α. The results showed that while the miR-26a and miR-26b mimics significantly reduced the stability of the positive control mRNA, the MAP kinase 6 (MAPK6) transcript (Figure 2B), neither of them had a destabilizing effect on the IL-6 mRNA (Figure 2C). Collectively, we conclude that miR-26 does not directly target the IL-6 transcript for gene silencing.

MiR-26 down-regulates IL-6 production through dampening IL-6 transcription activated by TNF-α/NF-κB signalling

The steady-state level of a cytoplasmic mRNA represents a balance between its biogenesis in the nucleus and its degradation in the cytoplasm (29). In light of the finding that miR-26 does not directly target the IL-6 transcript for mRNA degradation (Figures 1 and 2), we tested whether miR-26 down-regulates the IL-6 level by repressing IL-6 transcription activated by TNF-α. We constructed a human IL-6 promoter-driven FL reporter gene (IL-6-FL) containing the transcription elements described previously (25-27). Results from dual luciferase assays with this reporter (Figure 3A) showed that the human IL-6 promoter responds strongly to TNF-α treatment, giving a ∼15× induction of luciferase activity in the presence of the control miRNA mimic (Figure 3A), which directly parallels the level of TNF-α induction seen for the endogenous IL-6 gene in BEAS-2B cells (Supplementary Figure S1A). In the presence of the miR-26a mimic, the induction of luciferase activity derived from IL-6-FL was only ∼5×, one-third of the induction in the presence of the control mimic. Moreover, the steady-state level of endogenous IL-6 mRNA in the presence of the miR-26a mimic was ∼35% (i.e. also about one-third) of the level in the presence of the control mimic (Figure 3B). In contrast, there was little induction of luciferase activity derived from a negative control RL reporter gene driven by the human GAPDH promoter (GAPDH-RL) (Figure 3C), and the GAPDH promoter activity was the same with the miR-26a mimic as with the control mimic (Figure 3C). These results indicate that miR-26 down-regulates the TNF-α-induced IL-6 promoter activity. Since TNF-α activates NF-κB, a transcription factor that plays an important role in the TNF-α-mediated activation of many genes encoding cytokines and chemokines, including IL-6 (25,41), we tested whether miR-26 can compromise the TNF-α-mediated activation of NF-κB signalling. We used the dual luciferase assay to measure the effect of the miR-26a mimic on the activity of an FL reporter gene driven by a minimal promoter containing a copy of the NF-κB response element.
With the control mimic, TNF-α stimulation gave a ∼250× induction of luciferase activity from this reporter (Figure 3D). In the presence of the miR-26a mimic, the TNF-α induction of activity driven by this NF-κB-responsive promoter was <100×, i.e. ∼38% of the induction in the presence of the control mimic (Figure 3D). Collectively, the results (Figure 3A-D) demonstrate that the miR-26a mimic blunts TNF-α activation of both the IL-6 and NF-κB promoters, but not the GAPDH promoter. We conclude that miR-26 decreases IL-6 production by silencing the transcription from the IL-6 promoter that is activated by TNF-α/NF-κB signalling.

Investigating the effect of miR-26 on TNF-α/NF-κB-responsive genes at the transcriptome level

As NF-κB signalling plays a key role in the activation of many cytokine and chemokine genes, our findings (Figure 3A-D) suggest a previously unknown and broad effect of miR-26 on NF-κB-responsive genes. Thus, we tested whether miR-26 also down-regulates the expression of other genes besides IL-6. We first carried out qRT-PCR analysis using an NF-κB signalling target gene array containing 84 NF-κB-responsive genes. We readily detected more than 58 genes expressed in BEAS-2B cells in two replicates. At least 31 of these expressed genes (Supplementary Table S1) showed a >2-fold induction upon TNF-α treatment in the presence of a control miRNA mimic. Analysis using the data from both PCR-array experiments and a high-depth RNA sequencing experiment (see below) showed that the induction levels of 17 of the 31 TNF-α/NF-κB-responsive genes in the presence of the miR-26a mimic were appreciably reduced compared with the control values (Figure 3E). We conclude that miR-26 can down-regulate the activation of many NF-κB-responsive genes in BEAS-2B cells. Many of these genes, such as C3, CXCL1, CXCL2, IL-6, CXCL8 (IL-8), SOD2 and MMP9, are directly related to inflammation and tissue remodelling (42,43).

Figure 2. (A) The RL reporter mRNA carrying three copies of the predicted miR-26 recognition site from human GW182 (3×26(GW182)) served as a positive control. RL activities derived from the indicated RL reporter mRNAs in BEAS-2B cells in the presence of miR-26a (light blue bars) or a control miRNA mimic (dark blue bars) were detected by dual-luciferase assay. FL activity, derived from the same plasmid carrying the RL reporter gene, was used for normalization. The relative silencing effects were measured by comparing the RL/FL activity detected in cells expressing each reporter mRNA in the presence of the miR-26a mimic with that detected in cells expressing the corresponding reporter mRNA in the presence of the control miRNA mimic (set as 1, representing no silencing effect). (B and C) miR-26a and miR-26b mimics destabilize the MAPK6 mRNA (B), but neither destabilizes the IL-6 mRNA (C). BEAS-2B cells transfected with miR-26a (red line), miR-26b (green line) or a control miRNA mimic (blue line) were stimulated with TNF-α for 6 h and then treated with actinomycin D (ActD; 5 μg/ml) to block transcription. Cells were harvested immediately (time 0) and after 1, 2, 4 or 6 h of ActD treatment. The levels of endogenous MAPK6 mRNA (B) or IL-6 mRNA (C) were quantified by real-time RT-PCR and normalized to the amount of an internal control, GAPDH mRNA. Half-lives shown in the semi-log plots were obtained by least squares analysis of the percentage of mRNA remaining as a function of time. All data represent the mean ± standard errors (n = 3). T-test was done to assess statistical significance.
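The PCR-array screen above boils down to two fold-change filters. The sketch below illustrates the logic with a small hypothetical table; the "appreciably reduced" criterion is qualitative in the text, so the halved-induction threshold used here is purely illustrative.

```python
import pandas as pd

# Hypothetical PCR-array output: mean normalized expression per gene under
# four conditions (control/miR-26a mimic, with/without TNF-alpha).
df = pd.DataFrame({
    "gene":      ["IL6", "CXCL8", "SOD2", "GAPDH"],
    "ctrl":      [1.0, 1.0, 1.0, 1.0],
    "ctrl_tnf":  [15.2, 9.8, 4.1, 1.1],
    "mir26":     [0.9, 1.0, 1.0, 1.0],
    "mir26_tnf": [5.1, 4.2, 3.9, 1.0],
}).set_index("gene")

induction_ctrl = df["ctrl_tnf"] / df["ctrl"]    # TNF-alpha induction, control mimic
induction_mir = df["mir26_tnf"] / df["mir26"]   # TNF-alpha induction, miR-26a mimic

responsive = induction_ctrl > 2                 # NF-kB-responsive by the >2-fold criterion
dampened = responsive & (induction_mir < 0.5 * induction_ctrl)  # illustrative threshold
print(df.assign(ind_ctrl=induction_ctrl, ind_mir=induction_mir, dampened=dampened))
```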
To better characterize the wider effects of miR-26 on TNF-α/NF-κB signalling, we performed high-depth (5-7 × 10⁷ reads) RNA sequencing (RNA-seq) using RNA samples from TNF-α-stimulated or non-stimulated BEAS-2B cells that had been transfected with either miR-26a or negative control mimics. We used this transcriptome-based approach to identify additional TNF-α/NF-κB-responsive genes, besides IL-6 and the other 16 genes described above (Figure 3E), whose transcriptional activation by TNF-α is affected either directly or indirectly by changes in the miR-26 level. We also performed parallel high-depth RNA-seq using RNA samples from cells treated with a miR-26 inhibitor (antagomir) to deplete endogenous miR-26 or with a control inhibitor. We looked for genes whose activation by TNF-α is not only repressed by the miR-26a mimic but also enhanced (de-repressed) by the miR-26a inhibitor. We used as a baseline the response to TNF-α treatment (calculated as the ratio of signals with and without TNF-α) in the presence of a control mimic, compared to the response with a miR-26 mimic (Supplementary Figure S5, left), or in the presence of a control inhibitor, compared to the response with a miR-26 inhibitor (Supplementary Figure S5, right). The genes exhibiting a differential response to the two conditions (i.e. induction by TNF-α is repressed by a miR-26 mimic, or induction by TNF-α is enhanced by a miR-26 inhibitor) are predicted to be miR-26-affected. We modelled the log2 fold changes as a linear relationship and calculated the distance from each gene to the identity line. Using the distribution of the distances for all expressed genes, we calculated z-scores to quantify the responses to the miR-26a mimic (Z_mimic) or inhibitor (Z_inhibitor). We then looked for genes whose induction decreased in the presence of the miR-26a mimic (Z_mimic < 0) and increased in the presence of the miR-26a inhibitor (Z_inhibitor > 0) (Figure 4A). Using this approach, we identified 103 genes (Figure 4A and Supplementary Table S2) whose induction by TNF-α was affected by changes in the level or activity of cellular miR-26 (Figure 4A and B).

Figure 3. (C and D) Histograms showing relative changes in luciferase activity following TNF-α treatment of BEAS-2B cells expressing a luciferase reporter mRNA driven by the human GAPDH promoter (C) or by a minimal promoter containing the NF-κB response element (D), in the presence of the miR-26a or a control miRNA mimic. Cells were co-transfected with a plasmid expressing a different luciferase as an internal control for luciferase activity quantification. All data represent the mean ± standard errors (n = 3). T-test was done to assess statistical significance. (E) MiR-26 down-regulates many NF-κB-responsive genes in BEAS-2B cells. Bar graph showing relative mRNA levels (miR-26a mimic/control mimic) for 31 NF-κB-responsive genes in TNF-α-stimulated BEAS-2B cells. The mRNA expression of these genes was readily detected by an RT-qPCR-based assay in two replicates using the Qiagen human NF-κB-responsive gene PCR array, and each gene was induced at least 2-fold by TNF-α treatment. These changes were further confirmed in an RNA-Seq experiment (Supplementary Table S1), except for 6 that lacked sufficient read depth. The bars represent the mean induction of the three experiments (except for the ones undetectable by RNA-Seq), and the error bars show the SEM. Gray bars: TNF-α-induced expression is decreased by the miR-26 mimic; black bars: TNF-α-induced expression is either unaffected or enhanced by the miR-26 mimic.
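The Z-score selection described above can be made concrete in a few lines of code. This is a schematic re-implementation, under the assumption that the signed perpendicular distance to the identity line was standardized by its mean and standard deviation over all expressed genes; the input arrays are simulated.

```python
import numpy as np

def response_zscores(log2_induction_ctrl, log2_induction_treat):
    """Z-scores from each gene's signed perpendicular distance to the identity
    line in the plane of log2 TNF-alpha inductions (control vs miR-26 mimic or
    inhibitor), standardized over all expressed genes."""
    x = np.asarray(log2_induction_ctrl, float)
    y = np.asarray(log2_induction_treat, float)
    d = (y - x) / np.sqrt(2.0)          # signed distance to the line y = x
    return (d - d.mean()) / d.std()

# Simulated log2 inductions for 500 expressed genes.
rng = np.random.default_rng(3)
ctrl = rng.normal(2.0, 1.0, 500)
mimic = ctrl + rng.normal(-0.2, 0.4, 500)    # induction blunted by the mimic
inhib = ctrl + rng.normal(+0.2, 0.4, 500)    # induction enhanced by the inhibitor
z_mimic = response_zscores(ctrl, mimic)
z_inhib = response_zscores(ctrl, inhib)

affected = (z_mimic < 0) & (z_inhib > 0)     # the selection behind the 103-gene set
print(f"{affected.sum()} of {affected.size} genes pass both criteria")
```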
To attribute functional categories to these 103 miR-26-affected genes, we looked for enriched Gene Ontology terms using GATHER (44). The functional categories most associated with these genes were immune response (FDR = 2 × 10⁻⁹) and inflammatory response (FDR = 1 × 10⁻⁷) (Supplementary Table S3). These functions are consistent with the cellular inflammatory response and gene activation involving TNF-α/NF-κB signalling (45). Indeed, the top hit from Ingenuity analysis of the 103 genes was a network that contains the NF-κB complex as a major hub (Supplementary Figure S6).

Figure 4. (A) Scatter plot of Z-scores (see Supplementary Figure S5 for more details). A total of 103 genes (see Supplementary Table S2) in the upper left area, with a Z-score (miR-26 mimic treated) < 0 and a Z-score (miR-26 inhibitor treated) > 0, were considered affected and subjected to further bioinformatics analysis. Green dots: up-regulated genes with a Z-score > 0 upon miR-26 inhibitor treatment; red dots: down-regulated genes with a Z-score < 0 upon miR-26 mimic treatment; yellow and orange dots: genes that satisfy both conditions. (B) Boxplot showing relative changes in the TNF-α-mediated induction of the 103 genes identified in panel A when miR-26 levels were increased by the miR-26 mimic or reduced by the miR-26 inhibitor. The red dotted line marks a 4-fold induction by TNF-α treatment. T-test was done to assess statistical significance.

To further address the involvement of NF-κB signalling in controlling the activation of these 103 genes by TNF-α, we also used the GATHER application to identify the transcription factor binding sites that are most common among the 103 genes. Remarkably, the three most significant hits were all matrices for the NF-κB complex (FDR = 0.03) (Supplementary Table S4). In addition, we used the Ingenuity Pathway Analysis algorithm to look for upstream signalling pathways that might lead to the TNF-α activation of the 103 genes. The top three hits were TNF (P-value of overlap = 7.04 × 10⁻³¹), the NF-κB complex (P-value of overlap = 4.12 × 10⁻²⁴) and RelA (P-value of overlap = 5.28 × 10⁻¹⁷) (Supplementary Table S5). Taken together, the results of these unbiased analyses of the 103 miR-26-affected genes are all consistent with their being activated through the NF-κB signalling pathway.

Identification of NF-κB signalling factors targeted by miR-26

The results described above suggest that miR-26 might act through silencing one or more factors required for proper activation of the NF-κB signalling pathway by TNF-α. Therefore, we analysed the RNA-seq datasets to identify NF-κB signalling related factors whose mRNA levels were lowered by miR-26 mimics in both TNF-α-stimulated and non-stimulated cells. We first calculated Z-scores for genes from the differences in their RNA levels in non-stimulated BEAS-2B cells treated with either the miR-26a mimic or a control mimic. We identified 949 genes whose expression decreased upon miR-26a mimic treatment (Z-score < −1) (Supplementary Figure S7A). To filter out potential indirect targets, we focused on the transcripts that were also enriched in Ago-CLIP-Seq tags with miR-26, using the 'starBase' platform (46,47). This analysis yielded a group of 396 potential miR-26a direct targets. To identify miR-26a targets likely to be involved in NF-κB signalling, we separately compiled a list of 334 reported or potential TNF-α/NF-κB signalling related factors (23,48-53) (Supplementary Table S6) and then selected genes appearing in both this list and the group of potential miR-26a direct targets. This analysis yielded four candidates for NF-κB signalling related miR-26 targets: BAG4, HMGA1, MALT1 and MAP3K1.
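At its core, the candidate filter just described is an intersection of three gene sets. A schematic sketch follows; the four named candidates come from the text, while the extra filler genes are hypothetical stand-ins for the full lists.

```python
# Down-regulated on miR-26a mimic (Z-score < -1); CCND2 and EZH2 are filler examples.
downregulated = {"HMGA1", "MALT1", "BAG4", "MAP3K1", "CCND2", "EZH2"}
# Enriched in miR-26 Ago-CLIP-Seq tags (starBase); filters out indirect targets.
clip_supported = {"HMGA1", "MALT1", "BAG4", "MAP3K1", "EZH2"}
# Curated list of reported/potential TNF-alpha/NF-kB signalling related factors.
nfkb_factors = {"HMGA1", "MALT1", "BAG4", "MAP3K1", "RELA", "NFKB1"}

direct_targets = downregulated & clip_supported
candidates = direct_targets & nfkb_factors
print(sorted(candidates))   # -> ['BAG4', 'HMGA1', 'MALT1', 'MAP3K1']
```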
We performed a similar analysis using RNA-seq data obtained from BEAS-2B cells stimulated with TNF-α, producing a group of 804 transcripts whose mRNA levels are lowered by miR-26a mimics in TNF-α-activated cells (Z-score < −1) (Supplementary Figure S7B). Of these transcripts, 122 were either marginally or significantly enriched in Ago-CLIP-Seq tags with miR-26. Four of these also appeared on the list of reported or potential TNF-α/NF-κB signalling related factors (Supplementary Table S6) and were thus candidates for NF-κB signalling related miR-26 targets: HMGA1, MALT1, PPP2R5E and ZNF462. Collectively, the above analyses of TNF-α-stimulated and non-stimulated conditions yielded two common genes, HMGA1 and MALT1; both genes have an established role in mediating NF-κB signalling (22-24,54). Therefore, we focused on these two genes in subsequent studies.

MiR-26 down-regulates IL-6 and many other NF-κB-responsive genes through silencing HMGA1 and MALT1

To verify that miR-26 can silence HMGA1 and MALT1 expression, we first performed western blot analysis (Figure 5A) and showed that the miR-26a mimic, but not a control mimic, greatly reduces the level of HMGA1 and moderately diminishes the level of MALT1 in BEAS-2B cells. As mammalian miRNAs silence their direct targets largely through eliciting rapid degradation of their mRNAs (38-40), we also checked the destabilizing effects of miR-26 on the HMGA1 and MALT1 transcripts. The results (Figure 5B) showed that the miR-26a mimic reduces the half-life of the HMGA1 mRNA from >12 to ∼6.8 h and that of the MALT1 transcript from ∼6.9 to ∼3.7 h. In contrast, little effect on IL-6 mRNA stability was observed (Figures 2C and 5B). These results not only demonstrate the destabilizing effects of miR-26 on the HMGA1 and MALT1 transcripts but also further substantiate that the IL-6 transcript is not a direct target of miR-26. Moreover, knocking down either HMGA1 or MALT1 dramatically reduces the IL-6 mRNA levels in TNF-α-activated BEAS-2B cells (Figure 5C and D), indicating that both HMGA1 and MALT1 are required for the full TNF-α-mediated induction of IL-6 expression in BEAS-2B cells.

We then tested whether HMGA1 and MALT1 are involved in the miR-26-mediated repression of the 17 NF-κB-responsive genes described in Figure 3E (gray bars). We knocked down both HMGA1 and MALT1 simultaneously, to an extent similar to that seen with miR-26a mimics in TNF-α-activated cells (Figure 6A), and the levels of the IL-6 transcript were reduced to ∼20% of the control levels (Figure 6B). We then evaluated the impact of this knockdown on the 17 miR-26-affected NF-κB-responsive genes using the NF-κB target gene PCR array (Figure 6C). Fourteen of the 17 NF-κB-responsive genes (including IL-6) that were down-regulated by miR-26a were also down-regulated to a comparable extent by the knockdown of HMGA1 and MALT1 (Figure 6C). Taken together, our results indicate that the suppressive effect of miR-26a on many NF-κB-responsive genes, including IL-6, is largely attributable to the ability of miR-26 to silence HMGA1 and MALT1.

Inverse relationship between levels of miR-26 and of HMGA1 or MALT1 transcripts in lung adenocarcinoma

The observation that miR-26 down-regulates NF-κB signalling is consistent with the notion that miR-26 levels may be altered in human cancers (2-5).
We thus used the starBase Pan-Cancer Networks platform to analyse clinical mRNA and miRNA expression profiles of 12 cancer types from The Cancer Genome Atlas (TCGA) data portal (46,47). We found that 9 of the 12 cancers exhibit decreased levels of miR-26a (Supplementary Table S7). We then used the starBase platform to perform a Pearson correlation analysis of the miR-26a levels and the mRNA levels of HMGA1 or MALT1 in different cancers. The results (Supplementary Table S8) showed that four of the nine cancer types, including breast cancer, head and neck squamous cell carcinoma, lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC), display a significant inverse relationship between miR-26a and HMGA1 mRNA levels. On the other hand, colon and rectal adenocarcinoma and glioblastoma multiforme exhibit a marginal inverse relationship between miR-26a and MALT1 mRNA levels (Supplementary Table S9). As our study uses a human bronchial epithelial cell line model, it is worth noting that both lung-related cancers (LUAD and LUSC) display a significant inverse relationship between miR-26a and HMGA1 mRNA levels (Figure 7A). We then focused on the LUAD and LUSC cancers for patient survival analysis. We first stratified the TCGA datasets of LUAD and LUSC patients according to differential miR-26 expression and performed Kaplan-Meier analysis, comparing the survival times of patient groups with high levels of miR-26a to those with low levels for each of the two cancers. This analysis did not reveal any significant correlation between miR-26a expression levels and mortality due to LUAD or LUSC (data not shown). However, when we stratified by the combined, inverse change of miR-26a and HMGA1 levels (Figure 7B), we discovered that the survival rate of LUAD patients with high miR-26a levels and low HMGA1 mRNA levels (blue line, 50% survival at 53 months) is higher than that of the LUAD patients with low miR-26a levels and high HMGA1 mRNA levels (red line, 50% survival at 20 months). At 120 months post-diagnosis, the difference between the two groups is even larger, with only 5% survival for tumours with low miR-26a expression and high HMGA1 expression versus 20% survival in the group with high miR-26a and low HMGA1 (Figure 7B). The overall survival for LUAD patients analysed using the miR-26a/MALT1 datasets showed a similar trend, albeit less significant than that using the miR-26a/HMGA1 datasets (Figure 7C). It is worth noting that at 120 months post-diagnosis the difference became quite marked, with no survival for patients with tumours expressing low miR-26a and high MALT1 versus 20% patient survival with tumours expressing high miR-26a and low MALT1 (Figure 7C). Similar analyses of patient survival using the LUSC dataset did not yield significant correlations (data not shown). We also analysed the IL-6 mRNA levels in LUAD patients. IL-6 mRNA levels were slightly higher in the low miR-26a LUAD patients than in the high miR-26a patients (data not shown). However, when we compared IL-6 mRNA levels between patient groups by factoring in the inverse relationship between miR-26a and HMGA1 transcript levels, we observed a significant difference in IL-6 mRNA levels (P = 0.001, Student's t-test) (Figure 7D, left). Likewise, we also observed a difference in IL-6 mRNA levels in the case of the miR-26a versus MALT1 patient groups (Figure 7D, right), albeit somewhat less significant.
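For illustration, a stratified comparison of this kind can be sketched with the lifelines package; the input file, the median-split stratification rule and the column names are assumptions for the example, as the exact stratification applied to the TCGA data is not specified here.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical clinical table: survival months, death indicator, expression levels
df = pd.read_csv("luad_clinical.csv")

# Combined, inverse stratification (median split assumed for the example)
hi_lo = df[(df["miR26a"] > df["miR26a"].median()) & (df["HMGA1"] < df["HMGA1"].median())]
lo_hi = df[(df["miR26a"] < df["miR26a"].median()) & (df["HMGA1"] > df["HMGA1"].median())]

kmf = KaplanMeierFitter()
for label, grp in [("miR-26a high / HMGA1 low", hi_lo),
                   ("miR-26a low / HMGA1 high", lo_hi)]:
    kmf.fit(grp["months"], event_observed=grp["death"], label=label)
    print(label, "median survival (months):", kmf.median_survival_time_)

# Log-rank test between the two strata
res = logrank_test(hi_lo["months"], lo_hi["months"],
                   event_observed_A=hi_lo["death"], event_observed_B=lo_hi["death"])
print("log-rank P =", res.p_value)
```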
Collectively, these analyses reveal associations between miR-26a-regulated changes in HMGA1, MALT1 and IL-6 mRNA levels and LUAD patient survival. In particular, miR-26a and HMGA1 transcript levels may have prognostic value in cancer patients with lung adenocarcinoma.

[Figure legend, condensed: the mRNA level of each of the 17 NF-κB-responsive genes from Figure 3E (gray), in cells transfected with miR-26a mimic or with HMGA1 and MALT1 siRNAs, was normalized to the mRNA level of the same gene in cells transfected with control miRNA mimic or control siRNA (set as 1). The scattering and average of the mRNA levels of the 17 genes under these different conditions were plotted as a combined bee swarm representation. A dashed line was drawn to assist comparisons of average fold change in each treatment. A t-test was used to assess statistical significance; **P < 0.01; ****P < 0.0001.]

DISCUSSION

In this study, we found that miR-26 dampens IL-6 expression by down-regulating the TNF-α/NF-κB signalling axis through silencing two NF-κB signalling factors, HMGA1 and MALT1, and not by directly targeting IL-6 mRNA. These findings are at odds with a previous study by another group which, based on some circumstantial observations, suggested that miR-26 silences IL-6 expression by directly targeting a predicted recognition site in the IL-6 3′ UTR for rapid mRNA decay in A549 cells (18). Moreover, our data showed that, through down-regulating NF-κB signalling, miR-26 dampens the expression of not only IL-6 but also many other NF-κB-responsive genes. We further discovered that poor patient prognoses in human lung adenocarcinoma are associated with the combination of low miR-26 levels and high HMGA1, MALT1 or IL-6 levels but not with any of them individually.

Several lines of direct evidence indicate that miR-26 does not directly target IL-6 mRNA for gene silencing. First, truncating the seed region of the predicted miR-26 recognition site does not affect the silencing function of the IL-6 3′ UTR (Figure 1A; compare IL-6 3′ UTR(Δ26) and IL-6 3′ UTR). This finding is consistent with an earlier study which showed that mutating the predicted miR-26 site does not change translation of IL-6 mRNA in HeLa cells (20). Moreover, a sub-region of the IL-6 3′ UTR containing the predicted miR-26 site was shown to have little destabilization effect on the mRNA in either monkey COS-7 or mouse NIH3T3 cells (19). Together, these observations indicate that the predicted miR-26 recognition site in the IL-6 3′ UTR is not only dispensable but also nonfunctional for gene silencing. Second, even when miR-26 mimics were introduced into either BEAS-2B or A549 cells, the expression levels of a reporter bearing either the entire IL-6 3′ UTR or three copies of the predicted miR-26 site were not reduced (Supplementary Figure S4). These findings indicate that the 3′ UTR of the IL-6 mRNA does not contain a functional miR-26-responsive site in either BEAS-2B or A549 cells. Third, the 5′ UTR and ORF of IL-6 mRNA do not respond to miR-26 mimics either (Figure 2A), indicating that the entire IL-6 transcript does not contain any functional miR-26 recognition site for gene silencing. Fourth, miR-26 mimics can destabilize the MAPK6 (Figure 2B), HMGA1 and MALT1 transcripts (Figure 5B) but cannot destabilize the IL-6 mRNA (Figures 2C and 5B). Clearly, miR-26 down-regulates IL-6 production through an indirect route rather than by directly targeting the IL-6 transcript for rapid decay or translation repression.
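As a worked example of the half-life measurements referred to above, first-order mRNA decay can be fitted by log-linear regression; the time points and relative levels below are made-up values chosen to give a half-life near the ∼6.8 h reported for HMGA1 mRNA with miR-26a mimic, not actual measurements.

```python
import numpy as np

def half_life(t_hours, rel_level):
    """Fit ln(level) = -k*t by least squares and return t1/2 = ln(2)/k."""
    k = -np.polyfit(t_hours, np.log(rel_level), 1)[0]
    return np.log(2) / k

t = np.array([0, 2, 4, 6, 8])                      # h after transcription block
hmga1 = np.array([1.00, 0.82, 0.67, 0.54, 0.44])   # illustrative relative levels
print(f"HMGA1 t1/2 ~ {half_life(t, hmga1):.1f} h") # ~6.8 h
```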
Our finding that miR-26 mimics repress the IL-6 promoter activity enough to account for the decrease in the steady-state level of IL-6 mRNA by miR-26 mimics in TNF-α-activated BEAS-2B cells (Figure 3A and B) not only further substantiates that miR-26 does not directly target IL-6 mRNA for rapid decay but also unravels the real mechanism underlying miR-26-mediated down-regulation of IL-6 production. Moreover, we demonstrated that miR-26 mimics can decrease TNF-α-mediated activation of a minimal promoter containing a DNA element responsive to NF-κB (Figure 3D), an essential modulator of transcription of many genes involved in cytokine and chemokine production (including IL-6) and cell survival and proliferation (e.g. (45,48,53,55)). By combining a transcriptome-wide approach (high-depth RNA sequencing and bioinformatics analyses) with manipulation of cellular miR-26 levels (Supplementary Tables S1-S6), we identified HMGA1 and MALT1 as NF-κB signalling factors targeted by miR-26.

The identification of HMGA1 mRNA as a direct target of miR-26 (Figure 5A, B, and Supplementary Figure S7) is consistent with a previous observation through dual-luciferase analysis, which showed that deletion of a predicted miR-26 recognition site in the 3′ UTR of HMGA1 mRNA abolished miR-26-mediated gene silencing of HMGA1 (56). Furthermore, using Ago-CLIP-seq databases, we found that the single miR-26 recognition site in HMGA1 mRNA is highly enriched in Ago-CLIP sequence tags (Supplementary Figure S8). Moreover, our time-course experiment showed that the miR-26 mimic significantly destabilizes HMGA1 mRNA (Figure 5B). All these findings further substantiate that HMGA1 is a direct target of miR-26 and also validate the approach we used to identify NF-κB factors targeted by miR-26. Our data also show that miR-26 mimics can down-regulate MALT1 expression by promoting its mRNA decay (Figure 5B), a prominent mechanism of miRNA-mediated gene silencing (38)(39)(40). Thus far, no predicted miR-26 recognition site has been identified in the 3′ UTR of MALT1, and we did not find any site in the MALT1 3′ UTR with a significant enrichment of Ago-CLIP sequence tags that could potentially be recognized by miR-26 (data not shown). One possibility is that MALT1 mRNA carries an unconventional miR-26 recognition site that evades prediction by the commonly used algorithms. While the present data do not rule out the possibility that MALT1 may be indirectly silenced by miR-26, the expression of both HMGA1 and MALT1 is appreciably dampened by miR-26 (Figure 5A and B). Moreover, we also show that both HMGA1 and MALT1 are required for a full TNF-α-mediated induction of IL-6 expression (Figure 5C and D). Knocking down both HMGA1 and MALT1 had a silencing effect on many NF-κB-responsive genes, including IL-6, similar to that caused by miR-26 (Figure 6). Collectively, our data support the notion that the suppressive effect of miR-26 on many NF-κB-responsive genes is largely attributable to the ability of miR-26 to silence HMGA1 and MALT1. The important issue regarding whether and how miR-26 might be linked to the control of NF-κB signalling was never addressed previously, although miR-26a was shown to inhibit cell proliferation and cell motility in bladder cancer through silencing of HMGA1 (56).
Our present finding that miR-26a can repress the activation of NF-κB-responsive genes via its ability to silence MALT1 and HMGA1 (Figures 5 and 6) not only reveals a novel role of miR-26 in down-regulating inflammatory mediator production in bronchial epithelial cells but also provides further mechanistic insight into how miR-26 may accomplish this role by silencing NF-κB signalling. HMGA1 is a nonhistone, chromatin-binding protein that is highly expressed during embryogenesis and in some poorly differentiated cancers. Along with NF-κB and other promoter-binding transcription factors in the nucleus, HMGA1 is thought to activate inflammation- and proliferation-related genes (23,24). On the other hand, MALT1 has been shown to regulate NF-κB activation in lymphocytes through recruiting and activating the cytoplasmic IκB kinase (IKK) complex (22,54), which is involved in propagating the lymphocyte response to inflammation. MALT1 can also activate NF-κB in an IKK-independent manner in lymphocytes by cleaving an NF-κB inhibitor, RelB, to facilitate DNA binding by RelA- or c-Rel-containing NF-κB complexes for transcriptional activation. Thus, our findings (Figures 5 and 6) support a model (Figure 8) illustrating how miR-26 can dampen activation of the NF-κB signalling pathway quite effectively. In this model, up-regulating miR-26 has a two-pronged action, decreasing levels of HMGA1 in the nucleus and MALT1 in the cytoplasm (Figure 8). Another critical point revealed by our study is that evaluation of miR-26 as a silencer for a given cytokine needs to consider the possibility of indirect actions on signalling pathways regulating transcription, as well as direct actions on the transcript for the cytokine. Our finding that poor patient prognoses in human lung adenocarcinoma are associated with the combination of low miR-26 levels and high HMGA1, MALT1 or IL-6 levels, but not with any of them individually (Figure 7), also highlights the importance of the interplay among these factors in regulating inflammation and tumourigenicity. Along this line, it is worth noting that IL-6 up-regulation has been found to reduce miR-26a expression in hepatocellular carcinoma cells (7), suggesting that a negative feedback loop may exist between IL-6 and miR-26 expression. As inflammation is a major factor contributing to malignancy and IL-6 is a prominent pro-inflammatory mediator (14,57,58), one important implication of our present findings is that miR-26 may have a role in limiting chronic airway inflammatory disease through silencing NF-κB signalling. This notion is supported by mouse model studies showing the functional significance of NF-κB-driven processes in orchestrating events pertinent to human asthma (59)(60)(61)(62). Also, a recent study (63) reported that broadly induced overexpression of miR-26a in a transgenic mouse model is well tolerated by the animals without any obvious side effects and, importantly, is not oncogenic. Our results raise the possibility that miR-26 mimics may be useful in dampening activation of the NF-κB pathway in epithelial cells, as part of therapeutic modification of pathological airway inflammation. The present findings also point to the possibility of exploiting the actions of miR-26 to manage chronic inflammation and related malignancies in general.
Fatigue Crack Growth Behavior and Fracture Toughness of EH36 TMCP Steel

The fatigue crack growth behavior and fracture toughness of EH36 thermo-mechanical control process (TMCP) steel were investigated by fatigue crack growth rate testing and fracture toughness testing at room temperature. Scanning electron microscopy was used to observe the fracture characteristics of fatigue crack propagation and fracture toughness. The results indicated that the microstructure of EH36 steel is composed of ferrite and pearlite with a small amount of texture. The Paris formula was obtained based on the experimental data, and the value of fracture toughness for EH36 steel was also calculated using the J-integral method. The observations conducted on fatigue fracture surfaces showed that there were many striations, secondary cracks and tearing ridges in the fatigue crack propagation region. Additionally, there were many dimples on the fracture surfaces of the fracture toughness specimens, which indicated that the crack propagated through the mechanism of micro-void growth/coalescence. Based on the micromechanical model, the relationship between the micro-fracture surface morphology and the fracture toughness of EH36 steel was established.

Introduction

In recent years, many strategies have been developed for ocean vessels that have focused on maximizing their scale, minimizing their weight, and ensuring that they operate in an environmentally friendly manner and are capable of deep ocean-going. As a typical high-strength, high-toughness ship plate steel, EH36 is mainly used in the manufacturing of large offshore platforms and of large and medium-sized ocean-going ships, for strength decks, overhead strakes or arc plates, and other key parts of the hull. The properties of EH36 ship plate steel are mainly controlled by microalloying and the thermo-mechanical control process (TMCP) [1,2]. Marine ships, especially large ocean-going ships, often operate under the conditions of strong winds and waves encountered during navigation [3,4]. These conditions mean that the hull is often subject to complex stress states caused by the huge impact forces and periodic alternating loads that come from different directions [5,6]. In order to ensure the safety and reliability of ocean vessels, major steel manufacturers have focused not only on improving the coupled strength-toughness performance of offshore structural steel, but also on large-heat-input welding properties and resistance to corrosion brought about by the ocean environment [7][8][9][10]. However, the fatigue resistance of high-strength offshore structural steel is also an important index that cannot be ignored. For ocean vessels, fatigue cracks are easily initiated in local areas of the hull due to their complex structures and harsh long-term working environments. When the fatigue crack growth rate reaches a certain critical value, damage to the ship structure will occur due to the propagation of fatigue cracks [11][12][13]. Barbosa et al. [14] investigated the growth rates of fatigue cracks in EH36 steel caused by submerged arc welding. As described above, EH36 ship plate steel will be subjected to alternating loads during service, which will cause fatigue damage, and the rate of fatigue crack propagation will affect the service life of the steel. In the process of fracturing, material deformation, crack initiation and crack propagation are accompanied by energy consumption, and fracture characteristics differ according to the amount of energy consumed. In this paper, the fatigue crack growth rate of EH36 steel was measured at room temperature to obtain the corresponding Paris formula, and the fracture toughness value (KJ0.2BL) was calculated using the J-integral method.
Furthermore, the corresponding fracture mechanisms were also investigated to provide reliable theoretical support and guidance for the engineering application of ship plate steels.

Materials and Methods

In the present study, the chemical composition of EH36 high-strength steel is listed in Table 1. EH36 steel is produced by TMCP, and the corresponding process parameters are as follows: the slab is heated to 1200 °C and maintained at this temperature for two hours. The initial, second and final rolling temperatures are 1050-1150 °C, 840-880 °C and 810-850 °C, respectively. The optimal reduction ratio of each stage is 60% and the pass-down ratio is 12%. After rolling, water cooling is carried out; the final cooling temperature is above 600 °C, and the cooling rate is 3-10 °C/s. The yield strength (Rp0.2) and ultimate tensile strength (Rm) of EH36 steel at room temperature are 424 MPa and 537 MPa, respectively. The elastic modulus E is 207 GPa, and the Poisson's ratio ν is 0.33. The engineering stress-strain curve is shown in Figure 1. Microstructures of the etched specimen were observed with JSM 6480LV scanning electron microscopy (SEM), including the use of an energy dispersive spectrometer (EDS) and electron back-scattered diffraction (EBSD) techniques.
The EBSD specimens were obtained by mechanically grinding and then electrochemically polishing the EH36 steel at a voltage of 30 V and -20 °C. The solution for electrochemical polishing was a 5% (volume fraction) perchlorate alcohol solution. The fatigue crack growth tests were carried out on an Instron 8801 hydraulic fatigue testing machine at ambient temperature in a laboratory air environment; the maximum load of the equipment was ±100 kN. The testing standard used was ISO 12108:2012. During the experiment, a crack opening displacement (COD) gauge was installed on the specimen, and the crack length (a) was measured by the compliance method. The loading method was axial loading, and the stress ratio was 0.1. The test frequency was 20 Hz, and the crack propagation length was about 2 mm. The fatigue crack growth rate test was carried out on three specimens. A compact tension (CT) specimen with a size of 10 mm × 48 mm × 50 mm (B × W × L) was prepared for fatigue crack propagation, as shown in Figure 2a. The fracture toughness tests were carried out on an Instron 8802 hydraulic fatigue testing machine, and the maximum load of the equipment was ±250 kN. The testing standard employed was ISO 12135:2002. Similarly, the fatigue pre-cracking process was carried out at a load ratio of R = 0.1, and the load range ΔP was 16 kN. The test frequency was 10 Hz and the pre-crack length was 3 mm. Then, the specimens were statically loaded to different crack lengths at room temperature; the displacement rate of the static loading test was 1 mm/min. Finally, the specimens were subjected to secondary fatigue until fracturing occurred. The fracture toughness test was carried out on ten specimens. The sizes of the fracture toughness specimens are also shown in Figure 2b. Before the tests, all specimens were prepared by wire cutting and with the use of a grinding machine. The difference was that the notch for the COD was located on the plane of the loading line, and the specimen contained grooves on both sides. Furthermore, the fatigue fracture surfaces were observed by SEM. Schematic diagrams of the crack growth test and the fracture toughness test are shown in Figure 3.
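As a side note on the data reduction, the sketch below shows how the stress intensity factor range is typically evaluated for a compact tension specimen of the stated dimensions; the geometry function is the standard one used in ASTM E647/ISO 12108, while the crack length and load range in the example are illustrative assumptions, not values from the tests.

```python
import math

def delta_k_ct(dP_N, B_m, W_m, a_m):
    """ΔK in MPa·m^0.5 for a compact tension specimen (valid for a/W >= 0.2)."""
    x = a_m / W_m
    f = ((2 + x) / (1 - x) ** 1.5 *
         (0.886 + 4.64 * x - 13.32 * x**2 + 14.72 * x**3 - 5.6 * x**4))
    return dP_N / (B_m * math.sqrt(W_m)) * f / 1e6

# B = 10 mm and W = 48 mm as above; the crack length and load range are assumed
print(f"ΔK ≈ {delta_k_ct(dP_N=9000, B_m=0.010, W_m=0.048, a_m=0.020):.1f} MPa·m^0.5")
```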
Figure 4 shows the microstructure of EH36 ship plate steel after deep etching, which involved ferrite and pearlite. In the corresponding SEM micrograph, it can be seen that the ferrite was mainly composed of acicular ferrite (AF) and polygonal ferrite (PF). Most of the AFs were formed in isolation in the surrounding pearlite, which can presumably be attributed to the favorability of the TMCP condition for the transformation of pearlite. Furthermore, a small amount of grain-boundary ferrite (GBF) can be observed along prior austenite grain boundaries, as shown in Figure 4a. In addition, the pearlite was also distributed at the ferrite grain boundaries. Figure 5 shows the distribution of the added elements, which illustrates that the added elements were evenly dissolved into the matrix. In order to more clearly examine the microstructural features of the material, EBSD analysis was implemented, and the results are exhibited in Figure 6. The color of each grain is coded by its crystal orientation based on the [001] inverse-pole figure (IPF). The EBSD map shows that the morphologies of the ferrite grains were fine and granular, with an average grain size of about 7.3 µm.
Taking into account the φ2 value of 45° (φ1, Φ = 0-90°), in addition to some important orientations in Euler space, it can be concluded that there were {112}<110> and {111}<112> textures, but the maximum densities were not strong. According to the analysis, the material had almost no residual austenite.

Fatigue Properties and Fracture Toughness

During the fatigue crack growth tests, fatigue cracks in the plastic zone were first prefabricated on the specimen by the method of step-by-step load reduction. Constant load control was adopted, and the load-reduction amplitude between adjacent load levels did not exceed 20%. Then, keeping the load constant until the specimen broke, the da/dN and ΔK values were obtained, and the data points that did not satisfy W - a > (4/π)(Kmax/Rp0.2)^2 were removed to obtain the double logarithmic curve of da/dN - ΔK. Finally, the Paris formula was obtained through double logarithmic linear fitting. For EH36 ship plate steel, the increase in crack length with the number of cycles (N) and the fatigue crack growth curves in the Paris region obtained from the experimental records are shown in Figure 7. From the variation of crack length, it can be seen intuitively that the crack length grew faster and faster as the number of cycles increased. The crack-propagation curves of crack length versus number of cycles were approximately exponential in form, which indicates that the crack growth rate increased.
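Before the fitting, the recorded (a, N) pairs must be reduced to growth rates. A minimal sketch of the secant data-reduction step commonly used for such tests follows; the arrays are illustrative, not the measured data.

```python
import numpy as np

N = np.array([0, 10_000, 20_000, 30_000, 40_000])  # cycles (illustrative)
a = np.array([10.0, 10.6, 11.4, 12.5, 14.1])       # crack length, mm (illustrative)

da_dN = np.diff(a) / np.diff(N)                    # secant growth rate, mm/cycle
a_mid = (a[:-1] + a[1:]) / 2                       # rate assigned to mid-crack length
for ai, rate in zip(a_mid, da_dN):
    print(f"a = {ai:5.2f} mm   da/dN = {rate:.2e} mm/cycle")
```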
In double logarithmic coordinates, the relationship curves between da/dN and ΔK are shown in Figure 8. It can be seen that there was a linear relationship between them, which satisfied the Paris formula given below:

da/dN = C(ΔK)^m    (1)

where C and m are parameters related to the material. According to the Paris formula, larger ΔK values lead to larger crack growth rates, so that the crack growth rate takes a power-law form in ΔK as the crack propagates. From the data, the individual and average Paris formulas for the three samples and the corresponding values of C and m were obtained using the least-squares linear fitting method, as shown by the solid red line in Figure 8. The corresponding values of C and m are also listed in Figure 8.
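For illustration, the double-logarithmic fit can be reproduced with a few lines of Python; the synthetic data below are generated from the average C and m values reported in the conclusions, so the fit simply recovers them.

```python
import numpy as np

dK = np.array([20.0, 25.0, 30.0, 35.0, 40.0])      # MPa·m^0.5 (synthetic)
da_dN = 1.975e-9 * dK**3.327                       # generated from the average fit

# Linear fit in double logarithmic coordinates: log10(da/dN) = log10(C) + m*log10(ΔK)
m, logC = np.polyfit(np.log10(dK), np.log10(da_dN), 1)
print(f"m = {m:.3f}, C = {10**logC:.3e}")          # recovers m ≈ 3.327, C ≈ 1.975e-9
```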
In this paper, the fracture toughness of EH36 steel was measured using the J-integral (J) versus crack propagation quantity (Δa) resistance curve at room temperature. During the fracture toughness tests, different specimens were statically loaded to different crack growth lengths and then unloaded. Meanwhile, the load and displacement data of the loading line were collected and recorded. In order to accurately measure the length of crack growth, all specimens needed to be thermally colored. The heating temperature was 350 °C, and the heating time was 30 min. Then, the fatigue crack length was measured and the corresponding J-integral value was calculated. The mathematical expression of the resistance curve is as follows:

J = α + β(Δa)^γ    (2)

where α, β and γ are parameters related to the material. According to the fatigue crack fracture, nine crack lengths of the prefabricated crack front and nine crack lengths of the crack propagation front were measured using an optical microscope. Using the average length of the prefabricated cracks (a0) and of the propagation cracks (a), the value of Δa was calculated as

Δa = a - a0    (3)

Furthermore, the value of J was calculated using the loading and unloading curves. The loading and unloading curves of the different fracture toughness specimens are shown in Figure 9. Using Δa as the abscissa and J as the ordinate, the J-Δa diagram is shown in Figure 10. There is a passivation (blunting) line in the figure, shown as the tilted solid black line, the expression of which is J = 3.75RmΔa = 2014Δa. Drawing a parallel to the passivation line through (0.1, 0) and taking the region to the right of this parallel as the effective region, all data points within the effective region were fitted according to Equation (2). A new J-Δa resistance curve could thus be obtained, shown as the solid red line in the figure, and Equation (2) could be rewritten as J = 1085Δa^0.376 (α = 0, β = 1085, γ = 0.376). At this point, a parallel to the passivation line was drawn through the point (0.2, 0) to obtain the ordinate of its intersection with the J-Δa resistance curve, which represents the fracture toughness of the EH36 steel. The value of J0.2BL was 926 kJ/m^2. KJ0.2BL could then be calculated using Equation (4):

KJ0.2BL = [E·J0.2BL/(1 - ν^2)]^(1/2)    (4)

Finally, the value of KJ0.2BL was 464 MPa·m^1/2.

Figure 10. Fracture toughness J-Δa resistance curve of EH36 steel.
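The reported values can be checked numerically: intersecting the fitted resistance curve with the 0.2 mm offset line parallel to the blunting line, and converting J to K with the standard plane-strain relation of Equation (4), reproduces the quoted J0.2BL and KJ0.2BL.

```python
import math
from scipy.optimize import brentq

E, nu = 207e9, 0.33                        # Pa and dimensionless, from the tensile data

# Fitted resistance curve J = 1085*Δa^0.376 (kJ/m^2, Δa in mm) minus the
# 0.2 mm offset line J = 2014*(Δa - 0.2) parallel to the blunting line
def gap(da_mm):
    return 1085 * da_mm**0.376 - 2014 * (da_mm - 0.2)

da = brentq(gap, 0.3, 1.5)                 # intersection point, mm
J_02BL = 1085 * da**0.376 * 1e3            # kJ/m^2 -> J/m^2
K = math.sqrt(E * J_02BL / (1 - nu**2)) / 1e6   # Pa·m^0.5 -> MPa·m^0.5
print(f"J0.2BL ≈ {J_02BL/1e3:.0f} kJ/m^2, KJ0.2BL ≈ {K:.0f} MPa·m^0.5")  # ≈926, ≈464
```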
Morphological Features of Fracture Surface

In order to analyze the fatigue fracture mechanisms of the EH36 steel, the fatigue fracture and crack morphology were observed and characterized at the micro level. As is well known, fatigue fractography consists of several different zones, including the initiation, propagation and final fracture of the fatigue crack. Figure 11 displays the fracture morphology of the propagation region for three fatigue crack propagation specimens at different magnifications. The direction of fatigue crack propagation is indicated by a long white arrow. Figure 11b,d,f are the enlarged views of the white box regions in Figure 11a,c,e. As can be seen from the diagram, the typical characteristics of striations could be observed in the crack-expanding region. In theory, every fatigue cycle produces a fatigue striation in the Paris region. The average striation distance can reflect the value of the crack growth rate; that is to say, the narrower the fatigue striation spacing, the slower the growth of the fatigue crack, which indicates a better resistance to fatigue propagation [31,32]. In addition to fatigue striations, secondary cracks and tearing ridges could also be observed in this zone. This indicated that there was a large stress concentration at the fatigue crack tip, and that the crack propagation was prone to deflection under normal stress, resulting in a relatively rough fatigue crack propagation zone. Extensive studies have shown that the fatigue limit of ordinary carbon steel will increase with decreasing subgrain size [33]. Heat treatment can effectively change the microstructure of eutectoid steel; that is, it can change the spacing of the thin ferrite and pearlite sections in the steel and thus change the fatigue properties of the material.
Furthermore, the presence of partial {112}<110> and {111}<112> textures in the microstructure of EH36 steel will lead to anisotropy of the polycrystalline alloy, which will, to a certain extent, influence the deformation behavior [34]. At the fatigue crack tip, due to the large stress concentration, the stress near the crack tip will exceed the yield stress of the material and plastic deformation will occur, influencing the size of the plastic zone. The size and distribution of the plastic zone will affect the fatigue crack propagation behavior of the material. The existence of the texture and the plastic zone may cause the crack to deflect at the micro scale and form secondary cracks, which reduce the driving force at the crack tip and indirectly hinder the forward propagation of the crack; this is beneficial in terms of improving the fatigue resistance of the material. As an important part of fracture toughness research, the observation of fractures is essential, because the fracture surface preserves traces of the damage to the specimen that occurs during loading, which helps in understanding the fracture properties and deformation behavior of the material. The fracture morphologies of the ductile fracture toughness specimens were observed at the macro and micro levels, and the results are shown in Figure 12. Since the specimens were colored after loading and unloading in the fracture toughness test, the regions corresponding to different stages could be clearly distinguished on the crack surfaces, as shown in Figure 12a,b. It can be seen that the boundary lines of the several regions on the fracture surface were basically parallel. From right to left, the fracture surface presents the fatigue pre-crack zone, the static loading process zone, the second fatigue zone and the final instability fracture zone, as indicated by the red horizontal arrow in Figure 12a. The final fracture zone and rapid crack growth zone were relatively rough, but the final instability fracture zone exhibited the phenomenon of necking. The transition stage from the second fatigue region to the final instability fracture region is marked as I. The edge between the static loading process zone and the second fatigue zone is flagged as II. The partial amplification of the red box area in Figure 12a offers a clearer representation of the four types of regions and edges, as shown in Figure 12b.
The fractographs of the secondary fatigue zone are shown in Figure 12c,d, and Figure 12f,g show the local enlarged views of Figure 12c,d, respectively. Similarly to the crack propagation specimens, there are fatigue striations, tearing ridges and secondary cracks in the secondary fatigue zone. At the initial stage of this region, due to the relatively low stress intensity factor, the size of the plastic zone at the crack tip was small, resulting in the striations being relatively shallow and the crack growth zone being relatively flat. With the increase in the number of cycles, the stress intensity factor increased and the crack propagation accelerated, leading to clearer fatigue striations and an increase in secondary cracks. Combined with the fracture morphology from both tests, it can be inferred that the fracture surfaces of EH36 steel exhibit a typical transcrystalline fracture type. In addition to the typical fatigue fracture characteristics, there were also many ductile fracture characteristics in the static loading process zone on the fracture surface. For instance, many dimples of different sizes could be observed, as shown in Figure 12e,h. As is widely known, micro-void growth/coalescence and cleavage are the two main micro-fracture mechanisms that occur during the crack propagation of materials [35,36].
Stress concentration occurs at the crack tip due to the occurrence of plastic deformation during loading. If the local stress at the crack tip first reaches the cleavage fracture strength of the grain, the main crack will propagate forward through cleavage; in this case, the fracture surface is relatively smooth. On the contrary, if the local stress at the crack tip does not reach the cleavage fracture strength of the grain, a large amount of dislocation pile-up will be generated near the crack tip due to the stress concentration, and micro-voids will nucleate in the dislocation pile-up. Then, under the action of plastic strain, the micro-voids grow via dislocation movement around them, coalesce with other micro-voids, and finally form dimples [37]. At this point, the main crack connects with the dimples in front of it to form a new main crack, thus realizing crack propagation. These two microscopic mechanisms of crack propagation generally exist simultaneously on the surface of fatigue fractures, depending on the competitive relationship between stress and strain at the crack tip. Therefore, different materials will exhibit different fracture morphologies during crack propagation, and the EH36 steel studied in this paper formed dimples of different sizes under static loading, which is a typical micro-void growth/coalescence mechanism.

Analysis of Fracture Morphology and Fracture Toughness

Different fracture propagation mechanisms indicate that the propagation resistance and energy consumption are not the same during fatigue crack propagation, and the corresponding fracture toughness also differs. In the microstructure, this leads to the formation of different fracture morphologies. Therefore, the fracture toughness can be characterized by the microscopic fracture morphologies of the material. Based on the micromechanical model [38], a commonly used quantitative relationship, given as Equation (5), relates the fracture toughness (KC) to the dimple height h. To facilitate measurement, the diameter (d) can be used instead of the height of the dimples for calculation. However, for dimples of different sizes, the ratio of height to diameter is often inconsistent. Therefore, based on previous studies and the fracture morphology of EH36 steel, the dimples were divided into large dimples and small dimples. For large dimples (d > 10 µm), the height-diameter ratio could be considered to be about 1, that is, dl = hl, where hl and dl are the height and diameter of the large dimples, respectively. For small dimples (d < 10 µm), the height-diameter ratio detected and verified by FIB technology was about 0.5, that is, ds = 2hs, where hs and ds are the height and diameter of the small dimples, respectively. The dimple-free region can be considered as a cleavage plane, and its formation energy is far less than that of dimples; accordingly, it can be treated as a special dimple with a height of 0. The fracture toughness of the material could then be derived from the area fractions of dimples of different sizes, converting Equation (5) into Equation (6) in terms of Sl and Ss, the area fractions of large and small dimples, respectively, and D = 11, a coefficient related to the width-to-thickness ratio of the specimen. In this way, the quantitative relationship between the micro-fracture characteristics and the fracture toughness of EH36 steel was obtained.
According to the fracture morphology in Figure 12, the diameter and number of dimples were counted by SEM, and the percentages of area occupied by dimples of different sizes were then analyzed, as shown in Figure 13. Although the average diameter of the dimples was 11.12 µm, small dimples, due to their relatively small size, accounted for only 16.69% of the total area. The fracture toughness corresponding to all dimples could be calculated using Equation (6). As the number of counted dimples increased, the value of fracture toughness gradually increased, as shown by the red curve in Figure 13. The highest point of the curve is the fracture toughness of EH36 steel calculated using dimples. As mentioned above, the J-integral is also a method that calculates fracture toughness from the perspective of energy. The value of KJ0.2BL is also shown as the dashed line in Figure 13; it can be seen that KJ0.2BL was slightly higher than KC. In general, for highly ductile materials, the fracture toughness of the material is the sum of the energy required for plastic deformation, micro-crack initiation/propagation and micro-void growth/coalescence [39]. This small difference may have been caused by the fact that, when calculating the value of KC using Equation (6), only the energy absorbed by the ductile fracture formed through plastic deformation was considered, while the energy consumed to form the local tiny brittle cleavage planes was ignored. Furthermore, Frómeta [40] also reported similar results, arguing that the crack lengths at the initial and later stages of crack propagation would have a certain influence on the calculated value, causing the value calculated using the dimples in the static loading process zone at the crack front to be lower than that calculated using the J-integral. Through the above analysis, we linked fracture toughness with fracture surface morphology, and fracture morphology was attributed to different fracture mechanisms. From the perspective of energy, there are differences in energy consumption among different fracture mechanisms. It can be considered that the greater the amount of energy consumed during fracture, the larger the fracture toughness of the material.
The relationship between the fracture mechanism, the micro-fracture surface morphology and the fracture toughness in the material is shown in Figure 14. For EH36 steel, under the condition of pre-cracking, the more numerous and larger the dimples on the fracture surface-representing the characteristics of ductile fracture-are, the higher the value of fracture toughness (KC) will be. Furthermore, compared with small dimples, the formation of large dimples will consume more energy, thus improving the fracture toughness of EH36 steel. In this paper, by analyzing the fracture mechanism of EH36 steel, the influence of the cleavage plane, small dimples and large dimples on the fracture toughness is discussed, which can provide a reasonable theoretical basis for safe application in engineering. Through the above analysis, we linked fracture toughness with fracture surface morphology, and fracture morphology was attributed to different fracture mechanisms. From the perspective of energy, there are differences in energy consumption among different fracture mechanisms. It can be considered that the greater the amount of energy consumed during fracture, the larger the fracture toughness of the material. The relationship between the fracture mechanism, the micro-fracture surface morphology and the fracture toughness in the material is shown in Figure 14. For EH36 steel, under the condition of pre-cracking, the more numerous and larger the dimples on the fracture surface-representing the characteristics of ductile fracture-are, the higher the value of fracture toughness (K C ) will be. Furthermore, compared with small dimples, the formation of large dimples will consume more energy, thus improving the fracture toughness of EH36 steel. In this paper, by analyzing the fracture mechanism of EH36 steel, the influence of the cleavage plane, small dimples and large dimples on the fracture toughness is discussed, which can provide a reasonable theoretical basis for safe application in engineering. Figure 14. Relationship between fracture mechanism, micro-fracture surface morphology and fracture toughness. Conclusions The parameters of crack propagation were obtained and the value of KJ0.2BL was calculated. The crack propagation behavior and fatigue fracture mechanism of EH36 steel were investigated at room temperature. The corresponding conclusions can be summarized as follows: (1) The microstructure of EH36 steel was composed of fine ferrite and pearlite, with an average grain size of about 7.3 µm. The Paris formula was obtained by means of linear fitting, and the corresponding average values of C and m were equal to 1.975 × 10 −9 and 3.327, respectively. The fracture toughness of the EH36 steel was also calculated using the J-integral method, and the value of KJ0.2BL was 464 MPa·m 1/2 . (2) The fatigue fracture surfaces of the EH36 steel exhibited typical transcrystalline fracture characteristics, accompanied by fatigue striations, secondary cracks and tearing ridges. There were many dimples with different sizes in the static loading process zone of the fracture toughness tests, which indicated that EH36 steel has good toughness and presents the characteristics of ductile fracture under static loading conditions. (3) Based on the energy consumption analysis of the fracture morphology, the relationship between the micro-fracture surface morphology and the fracture toughness of EH36 steel was established, and the fracture toughness obtained was close to that calculated using the J-integral method. 
Conclusions

The parameters of crack propagation were obtained and the value of K_J0.2BL was calculated. The crack propagation behavior and fatigue fracture mechanism of EH36 steel were investigated at room temperature. The corresponding conclusions can be summarized as follows:

(1) The microstructure of EH36 steel was composed of fine ferrite and pearlite, with an average grain size of about 7.3 µm. The Paris formula was obtained by means of linear fitting, and the corresponding average values of C and m were equal to 1.975 × 10^-9 and 3.327, respectively. The fracture toughness of the EH36 steel was also calculated using the J-integral method, and the value of K_J0.2BL was 464 MPa·m^1/2.

(2) The fatigue fracture surfaces of the EH36 steel exhibited typical transcrystalline fracture characteristics, accompanied by fatigue striations, secondary cracks and tearing ridges. There were many dimples of different sizes in the static loading process zone of the fracture toughness tests, which indicated that EH36 steel has good toughness and presents the characteristics of ductile fracture under static loading conditions.

(3) Based on the energy consumption analysis of the fracture morphology, the relationship between the micro-fracture surface morphology and the fracture toughness of EH36 steel was established, and the fracture toughness obtained was close to that calculated using the J-integral method.
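Since conclusion (1) gives the fitted Paris constants, the crack growth rate can be evaluated directly. The minimal Python sketch below uses those averaged constants; the stress-intensity range values are illustrative assumptions, and the units (da/dN in mm/cycle when ΔK is in MPa·m^1/2) are the usual convention and should be checked against the original fitting.

```python
import numpy as np

# Paris law: da/dN = C * (ΔK)^m, with the averaged constants fitted in the
# paper for EH36 steel. Units are assumed to be mm/cycle and MPa·m^(1/2).
C = 1.975e-9
m = 3.327

def crack_growth_rate(delta_K):
    """Fatigue crack growth rate da/dN for a given stress-intensity range."""
    return C * np.power(delta_K, m)

# Illustrative ΔK values (MPa·m^(1/2)); these are assumptions, not test data.
for dk in (20.0, 40.0, 60.0):
    print(f"ΔK = {dk:5.1f} -> da/dN = {crack_growth_rate(dk):.3e}")
```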
Predicting Duration of Mechanical Ventilation in Acute Respiratory Distress Syndrome Using Supervised Machine Learning

Abstract

Background: Acute respiratory distress syndrome (ARDS) is an intense inflammatory process of the lungs. Most ARDS patients require mechanical ventilation (MV). Few studies have investigated the prediction of MV duration over time. We aimed at characterizing the best early scenario during the first two days in the intensive care unit (ICU) to predict MV duration after ARDS onset using supervised machine learning (ML) approaches. Methods: For model development, we extracted data from the first 3 ICU days after ARDS diagnosis from patients included in the publicly available MIMIC-III database. Disease progression was tracked along those 3 ICU days to assess lung severity according to the Berlin criteria. Three robust supervised ML techniques were implemented using Python 3.7 (Light Gradient Boosting Machine (LightGBM), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost)) for predicting MV duration. For external validation, we used the publicly available multicenter database eICU. Results: A total of 2466 and 5153 patients in the MIMIC-III and eICU databases, respectively, received MV for >48 h. The median MV duration of extracted patients was 6.5 days (IQR 4.4-9.8 days) in MIMIC-III and 5.0 days (IQR 3.0-9.0 days) in eICU. LightGBM was the best model in predicting MV duration after ARDS onset in MIMIC-III, with a root mean square error (RMSE) of 6.10-6.41 days, and it was externally validated in eICU with an RMSE of 5.87-6.08 days. The best early prediction model was obtained with data captured on the 2nd day. Conclusions: Supervised ML can make early and accurate predictions of MV duration in ARDS after onset over time across ICUs. Supervised ML models might have important implications for optimizing ICU resource utilization and reducing the high acute costs of MV.

Background

The acute respiratory distress syndrome (ARDS) is an important cause of morbidity, mortality, and costs in intensive care units (ICUs) worldwide [1]. It is a life-threatening form of acute respiratory failure characterized by inflammatory pulmonary edema leading to severe hypoxemia, requiring endotracheal intubation and mechanical ventilation (MV) in most cases [2]. The number of days on MV during the ICU stay is a major driver of high acute care costs [3-5]. We believe that an important intervention to mitigate these costs is timely recognition and treatment of conditions that can cause serious complications. The Berlin definition of ARDS identifies three mutually exclusive categories of lung severity with PaO2/FiO2 ratios in the ranges >200-300 mmHg (mild ARDS), >100-200 mmHg (moderate ARDS), and ≤100 mmHg (severe ARDS) [6,7]. Some studies [8,9] have reported a progression of costs from mild, to moderate, to severe ARDS. Despite global acceptance of the Berlin criteria [10], some authors have questioned its ability to assess the "true" severity of lung injury [11]. A recent study argues that mild ARDS should be considered "severe in terms of level of care" [12]. This quality criterion (i.e., level of care) could be measured in terms of MV duration, but accurate predictions of MV duration are difficult for critical care physicians [13,14], particularly for patients requiring prolonged MV [14]. Predicting MV duration could influence important clinical decisions, such as the timing of tracheostomy and the initiation of oral nutrition [14].
In this context, one approach for an accurate prediction of MV duration is the use of artificial intelligence (AI) approaches, such as machine learning (ML). ML is a subset of AI in which machines extract knowledge from the data provided. ML is an exploratory process where there is no one-method-fits-all solution [15,16]. ML merges statistical analysis techniques with computer science to produce algorithms capable of "statistical learning" [17]. ML algorithms are divided into two categories: supervised and unsupervised [17]. Supervised learning algorithms, the ones used in our study, detect relationships between potential explanatory features and a known target outcome [16]. They are commonly used in ICUs to predict clinical outcomes [16-21]. Troché and Moine addressed the critical question of whether MV duration is predictable [22]. Herein, we present the use of three powerful supervised ML methods to develop novel models to predict MV duration in ARDS after onset over time, using the single-center MIMIC-III dataset under three different scenarios. Then, the eICU multicenter dataset was used to externally validate the best MIMIC-III prediction model.

Study Design and Patient Population

We used two publicly available clinical databases for development and external validation of the best ML predictive model: MIMIC-III [23] and eICU [24], respectively. Data of the first 3 ICU days (day 1 for representative data within the first 24 h after ARDS onset, day 2 for data within 24-48 h after onset, and day 3 for data within 48-72 h after onset) (n = 2466, 1445, and 1278 patients, respectively) were extracted from the single-center dataset MIMIC-III (MetaVision, 2008-2012) [23]. Similarly, data of the first 3 ICU days after ARDS onset (n = 5153, 2981, and 2326 patients, respectively) were extracted from the multicenter dataset eICU (2014-2015) [24]. Patients <18 years were excluded. Data extraction from both datasets was performed using Python 3.7. The selection of clinical variables was based on prior studies [9,19,25-27]. All extracted patients from both datasets fulfilled the Berlin definition for ARDS [6]. For the purpose of this study, prolonged MV was defined as being ventilated for >48 h [22,28]. Disease progression in each dataset was tracked along those 3 ICU days.

MIMIC-III

Medical Information Mart for Intensive Care III (MIMIC-III) is a large single-center database containing de-identified health-related data of about 60,000 ICU patients admitted to the Beth Israel Deaconess Medical Center (Boston, MA, USA) between 2001 and 2012 [23]. There were six predictors: baseline demographic information (age); ventilator parameters, including PEEP; and blood gas parameters, including FiO2, PaO2, PaO2/FiO2, and PaCO2. The main target variable was MV duration.

eICU

eICU is a multicenter ICU database with high granularity of data on more than 200,000 ICU admissions [24]. We used this database for external validation of the best prediction model obtained from MIMIC-III in order to obtain the MV duration prediction in the eICU database.

Predictive Models

During the first 24 h after ARDS onset, misdiagnosis can occur if clinicians consider qualifying PaO2 values resulting from acute events unrelated to the disease process (such as endotracheal tube obstruction, barotrauma, or hemodynamic instability), instead of considering only PaO2 values recorded while patients are clinically stable.
It is also well established that changes in PEEP and FiO2 within the first few hours of routine intensive care management alter the PaO2/FiO2 ratio in ARDS patients [11]. Since a substantial proportion of patients diagnosed as having ARDS did not meet ARDS criteria within the first 24 h of care, we decided to examine supervised ML models in the following three scenarios during the first two ICU days: (i) scenario I: predicting MV duration using information captured on the 1st ICU day; (ii) scenario II: predicting MV duration using information captured on the 2nd ICU day; (iii) scenario III: predicting MV duration using information captured on the 1st and 2nd ICU days. These three scenarios were then compared with scenario IV, which predicts MV duration using the information captured on the 3rd ICU day exclusively.

We implemented three robust supervised ML algorithms via Python 3.7, including Light Gradient Boosting Machine (LightGBM) [29], Random Forest (RF) [30], and eXtreme Gradient Boosting (XGBoost) [31], to generate predictive models for MV duration after ARDS onset over time in the development database. For external validation purposes, we used the multicenter eICU dataset. These three methods sacrifice the explicitness of the model in favor of predictive quality, and the generated models should be seen as "black boxes" with high predictive robustness. For the development database, we optimized each model's parameters through a grid search over the respective model's hyperparameter space, and the quality of all prediction models was computed based on a 10-fold cross-validation approach, meaning that the dataset was divided into 10 folds and, in each run, 9 were used for training and the remaining 1 was used for testing. Root mean square error (RMSE) was used to assess the predictive quality of the models. RMSE flags more significant differences between the predicted and the actual patient readings when they occur [32]. MV duration was expressed in days.

Results

For the development and validation databases, mean values and 95% confidence intervals (CI) of baseline parameters during the first three ICU days after ARDS onset are reported in Table 1. The median and interquartile range (IQR) of MV duration are reported in Table 2. Table 3 shows the performance of the three supervised ML methods for the predictive scenarios in the development database. Table 4 shows the results of external validation of the best prediction model obtained from MIMIC-III to obtain the MV duration prediction in the eICU database.

For the development database, the best early ML model for predicting MV duration was obtained in scenario II, with RMSE = 6.10 days, using the LightGBM algorithm. Figure 1a represents the Bland-Altman plot for LightGBM predictions and truth values in scenario II. For the validation database, the best early ML predictive model for MV duration was also observed for scenario II, with RMSE = 5.87 days. This finding reinforces the idea that the best early approach for predicting MV duration is to consider the condition of the patient on the second ICU day after ARDS onset, rather than the first ICU day, or both.
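As a hedged illustration of the modeling pipeline described above (not the authors' actual code), the Python sketch below trains a LightGBM regressor on stand-in day-2 features and scores it with 10-fold cross-validated RMSE; the synthetic data and the hyperparameter values are assumptions.

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the six day-2 predictors (age, PEEP, FiO2, PaO2,
# PaO2/FiO2, PaCO2) and MV duration in days; real data come from MIMIC-III.
rng = np.random.default_rng(42)
X = rng.normal(size=(2466, 6))
y = np.abs(rng.normal(loc=6.5, scale=3.0, size=2466))  # MV duration (days)

model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)

# 10-fold cross-validated RMSE, mirroring the evaluation described in the text.
scores = cross_val_score(model, X, y, cv=10,
                         scoring="neg_root_mean_squared_error")
print(f"RMSE: {-scores.mean():.2f} ± {scores.std():.2f} days")
```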
Figure 1b represents the Bland-Altman plot for predictions and truth values in scenario II using the external validation of LightGBM. The Bland-Altman plots illustrate the agreement between the LightGBM models using the development and validation databases.

Discussion

Comparing the RMSE means in the best early scenario (scenario II) with the prediction based on the data of patients on their third ICU day (scenario IV) yields minor RMSE differences (development database: 0.18 day (6.10-5.92) for LightGBM; validation database: 0.16 day (5.87-5.71)). Given these small differences for both the development and validation datasets, our major finding was that the prediction results of LightGBM models based on the data of the second ICU day (scenario II) are very close to the corresponding results of LightGBM models based on the data of the third ICU day (scenario IV). Consequently, the LightGBM model can accurately predict MV duration without waiting for the data of the third ICU day. This means that MV duration can be predicted earlier, which could lead to better allocation of MV resources, reduced high acute costs of MV in ARDS, and improved patient care.

MV duration beyond 48 h in patients with ARDS provides information about risk factors in those patients [28] and has a direct correlation with ICU costs [4,5]. An early predictive model for MV duration can optimize ICU-level resource utilization [5,33]. Previous attempts to predict MV duration using conventional ICU scores or traditional statistical regression-based techniques have proven difficult and have failed to deal with the diversity of big data in modern ICU databases [22]. ML is reliable, and it is a non-invasive modality for generating models to predict MV duration. Most previous works considered a discriminatory prediction model to determine whether a patient will remain intubated after a fixed number of days (e.g., 7 days) [22]. By contrast, our approach is numerical, and it predicts the number of MV days earlier by using commonly accessible clinical variables during the first two ICU days. Furthermore, to strengthen the evidence of our results, we used a multicenter database (eICU) for external validation, in which the best model obtained from a single-center database (MIMIC-III) was used to obtain the MV duration prediction in the eICU database. Our findings could be used to facilitate optimal triage, more timely management, and better ICU resource utilization [34]. They may also affect some important clinical decisions, including the timing of tracheostomy and, potentially, transfers to long-term ventilator weaning units or referral to other centers [13]. Herein, the main objective of using ML was to show that ML is a promising approach for predicting MV duration early.
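For readers unfamiliar with the agreement analysis mentioned above, a minimal Bland-Altman sketch in Python follows; the predicted and true MV durations are synthetic assumptions, not study data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic predicted vs. true MV durations (days); real values would come
# from the scenario II LightGBM model and the held-out patients.
rng = np.random.default_rng(1)
truth = np.abs(rng.normal(6.5, 3.0, size=300))
pred = truth + rng.normal(0.0, 2.0, size=300)

mean = (pred + truth) / 2.0           # x-axis: average of the two measures
diff = pred - truth                   # y-axis: their difference
bias = diff.mean()
loa = 1.96 * diff.std()               # 95% limits of agreement

plt.scatter(mean, diff, s=10, alpha=0.5)
plt.axhline(bias, color="red", label=f"bias = {bias:.2f} d")
plt.axhline(bias + loa, color="gray", linestyle="--")
plt.axhline(bias - loa, color="gray", linestyle="--")
plt.xlabel("Mean of prediction and truth (days)")
plt.ylabel("Prediction - truth (days)")
plt.legend()
plt.show()
```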
The ML contribution of this large study is to demonstrate the applicability of this approach, rather than to choose the single most proper ML model. Furthermore, we believe that an efficient ML technique can yield accurate results for predicting MV duration. In terms of clinical relevance, our ML findings showed that using clinical data from the first ICU day is less predictive than data from the second ICU day. Previous studies showed that the accuracy of intensivists in predicting MV duration is limited [13]. However, comparison with other published ML predictions of MV duration is difficult, as we aimed at predicting MV duration for MV >48 h, and prior studies predicted different outcomes under different time frames, in different populations, and using different ML metrics. A recent ML study showed that the RMSE for predicting MV duration in ARDS patients with MV >48 h was 6.23 days [9]. However, the study in [9] had several weaknesses: (1) it ignored the temporal dependency of the longitudinal predictors and treated each observed data point independently, and (2) it was based only on the single-center MIMIC-III database without external validation. Hence, those findings have serious limitations for generalizability in the context of assessing the prediction of ARDS outcomes.

From the cost perspective, the mean incremental cost of MV in ICU patients in the US was $1522 per day [4]. For instance, if we compare our findings with the result of the best ML method used in [9], which had an RMSE of 6.23 days, we see that the LightGBM approach (the best approach) improved the current state of the art. This improvement can be quantified as 0.13 day (6.23-6.10) and about US $198 per patient according to [4] (a back-of-envelope check of this figure appears after this section). Developing early predictive models using ML could assist in implementing policies for the reduction of high acute care costs in ARDS [3-5]. Previous clinical studies showed the acute costs incurred by mechanically ventilated ICU patients, but there is a significant difference in costs between ventilated ARDS patients and those without ARDS [35]. More specifically, an ARDS diagnosis increases total ICU and hospital costs for mechanically ventilated ICU patients, suggesting higher total costs due to more days on a ventilator, although there is no clear severity-dependent relationship between ARDS severity and incurred costs [35]. The benchmarking of ML algorithms is possible through publicly available databases such as MIMIC-III [19,27] or eICU [19,36].

We acknowledge that our study has several strengths. First, we analyzed a large population of over 7000 ARDS patients from two ICU databases within the first three ICU days after ARDS onset. Second, we implemented and externally validated the best ML model (LightGBM), which can predict MV duration early and accurately using commonly accessible clinical variables. Third, early prediction of MV duration can inform population-level ICU resource allocation. Despite its strengths, we also acknowledge some limitations. First, our study is based on a retrospective analysis of data and should be confirmed through further prospective studies. Second, one could argue that the outcome of MV duration is somewhat subjective and could be a function of local practice or intrinsic bias inherent in such critical care decisions. However, our ability to predict a clinically relevant and difficult-to-predict outcome (MV duration) early supports the value of the proposed supervised ML models.
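The per-patient saving quoted in the cost discussion above follows from simple arithmetic, sketched below with the figures cited from [4] and [9]; it treats the RMSE difference as a direct change in ventilated days, as the text does.

```python
cost_per_mv_day = 1522          # USD, mean incremental cost of MV per day [4]
rmse_prior = 6.23               # days, best prior ML result [9]
rmse_lightgbm = 6.10            # days, scenario II LightGBM (this study)

improvement = rmse_prior - rmse_lightgbm          # 0.13 day
saving = cost_per_mv_day * improvement            # ~197.9 USD per patient
print(f"improvement: {improvement:.2f} day, saving: ${saving:.0f} per patient")
```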
Conclusions

Predicting MV duration after ARDS onset over time is complex and cannot be adequately performed by critical care physicians. Our findings showed that ML-based early prediction of MV duration is more accurate when predictive models are based on the clinical features of ARDS patients on the second ICU day after ARDS onset.

Institutional Review Board Statement: The datasets used for the analysis in this study are publicly available.
Informed Consent Statement: The datasets for the analysis are de-identified.
Data Availability Statement: By reasonable request to M.S. and D.R.
Structural, Optical, and Thermal Properties of PVA/SrTiO3/CNT Polymer Nanocomposites

Successful preparation of PVA/SrTiO3/CNT polymer nanocomposite films was accomplished via the solution casting method. The structural, optical, and thermal properties of the films were tested by XRD, SEM, FTIR, TGA, and UV-visible spectroscopy. Inclusion of the SrTiO3/CNT nanofillers with a maximum of 1 wt% drastically improved the optical and thermal properties of the PVA films. SrTiO3 has a cubic crystal structure, and its average crystal size was found to be 28.75 nm. SEM images showed uniform distribution in the sample with 0.3 wt% of SrTiO3/CNTs in the PVA film, while some agglomerations appeared in the samples of higher SrTiO3/CNT content, i.e., at 0.7 and 1.0 wt%, in the PVA polymer films. The inclusion of SrTiO3/CNTs improved the thermal stability of the PVA polymer films. The direct and indirect optical band gaps of the PVA films decreased when increasing the mass of the SrTiO3/CNTs, while the single-oscillator energy (E0) and dispersion energy (Ed) increased. The films' refractive indices gradually increased upon increasing the nanofillers' weight. In addition, improvements in the optical susceptibility and nonlinear refractive index values were also obtained. These films are qualified for optoelectronic applications due to their distinct optical and thermal properties.

Introduction

Polymers have garnered significant attention from researchers owing to their unique physical and chemical properties, coupled with their affordability, easy fabrication, stability across intermediate temperatures, transparency to visible and infrared wavelengths, and more [1-3]. Certain polymers like polyvinyl chloride (PVC) and polyvinyl alcohol (PVA) exhibit high transparency, enabling them to transmit a wide range of wavelengths, making them advantageous in various applications. PVA polymers are known for their ease of fabrication, high hydrophilicity, non-toxicity, availability, and dielectric strength, as well as their distinct optical, physical, and chemical properties [4-6]. Hence, PVA polymer holds promise for various applications due to the aforementioned features. Moreover, the properties of these polymers can be further enhanced through doping with suitable nanoparticles or nanofillers [7]. The properties of dopants, including their small size, optimal shape, and expansive surface area, serve as key factors that bolster the properties of the polymer [6].

Various studies have been conducted on polymer nanocomposites; however, the optical properties of PVA polymer have not yet been deeply investigated. The optical and structural properties of PVA were investigated when incorporating a low percentage of graphene oxide (GO), up to 1 wt%. The inclusion of GO into the PVA films reduced their direct and indirect optical band gaps [6]. Several studies have investigated the impact of GO on the properties of PVA polymer [8-10].

The structural, dielectric, and optical characteristics of PVA polymer were investigated upon incorporation of Ag-BaTiO3 nanofillers. The resulting PVA-Ag-BaTiO3 films exhibited favorable dielectric and optical characteristics, suggesting their suitability for electric and optoelectronic applications [11]. The structural and optical properties of PVA doped with PANI/Ag nanoparticles have been studied, and its optical properties have a direct dependency on the nanoparticles' mass [12].
Strontium titanate (SrTiO3) has been employed as a dopant for polymer films due to its distinct characteristics, i.e., a high dielectric constant, low dielectric loss, and minimal leakage current. With a relatively wide band gap energy of approximately 3.2 eV and a cubic perovskite structure at room temperature, SrTiO3 is anticipated to improve the thermal and optical performance of PVA polymer films when used as a dopant [3,13,14]. Hence, the thermal and dielectric properties of undoped PVA and PVA doped with SrTiO3 have previously been examined. Despite the relatively high percentage of SrTiO3, ranging from 5 to 15 wt%, the PVA/SrTiO3 nanocomposites exhibited favorable performance [15].

On the other hand, CNTs are expected to cause light scattering within PVA films, thus increasing the path length of photons, particularly in the UV-vis region. CNTs are sheets of graphene formed into tubes which are widely used for structural enhancements; they are hundreds of times stronger than steel [16,17]. The inclusion of SrTiO3 into PVA can cause photon scattering. The presence of SrTiO3/CNTs in a polymer film is also expected to enhance the thermal stability of the polymer. A recent study has shown a positive impact on the optical properties of PVDF polymer when doped with SrTiO3/CNTs [17]. Therefore, using SrTiO3/CNTs as dopants in PVA films has high potential to improve their optical and thermal stability. The influence of SrTiO3/CNTs on PVA films has not yet been investigated. Therefore, this work studies the impact of SrTiO3/CNT mixtures as dopants on the structural, thermal, and optical properties of PVA polymer films. PVA polymer films doped with 0.0, 0.3, 0.7, and 1.0 wt% SrTiO3/CNTs were synthesized via the solution casting method. The structural, optical, and thermal properties of these films were measured using various characterization techniques, i.e., XRD, SEM, TGA, and UV-visible spectroscopy. The thermal and optical performance of the PVA films improved upon the inclusion of SrTiO3/CNTs. The distinct properties of the PVA/SrTiO3/CNT polymer nanocomposite films indicate that the films have high potential for inclusion in optoelectronic applications.

Preparation Method and Characterization

A mixture of 0.2 g CNTs and 0.8 g SrTiO3 nanopowder was dispersed in 50 mL of ethanol for an hour. The mixture was exposed to ultrasonic waves for an hour and then dried inside an electric oven at 50 °C.
Finally, the leftover product was finely ground. The solution casting technique was utilized to process the polymer films. As such, 1 g of PVA was added to 80 mL of deionized water under magnetic stirring for an hour at ambient temperature to be dissolved. Different contents of SrTiO3/CNTs, i.e., 0.003, 0.007, and 0.01 g, were added to the PVA solution samples and stirred regularly for an hour. The resulting solutions, each with a volume of ~80 mL, were poured into glass petri dishes and left to dry in an oven at 80 °C for 48 h. Then, the polymer films were peeled off and taken for characterization. The thickness and dimensions of these films were almost the same, with an average thickness of nearly 112 µm.

A Shimadzu diffractometer (model 7000, Kyoto, Japan) with CuKα radiation (λ = 1.54056 Å), scanning in the range of 2θ = 10-80°, was used to record the XRD patterns of the polymer films consisting of pure PVA and PVA mixed with 0.3, 0.7, and 1 wt% SrTiO3/CNTs. A Shimadzu spectrometer (FTIR-Tracer 100, Kyoto, Japan), scanning wavenumbers from 399 to 4000 cm-1, was utilized to record ATR spectra. A Thermo Fisher Quattro ESEM (Thermo Fisher Scientific, Waltham, MA, USA) was also used to produce FESEM micrographs of the samples. This advanced technique was crucial as a source of morphology and microstructure information about the polymer films. The mass loss of the polymer nanocomposite films was examined as a function of temperature in the range of 30-600 °C by a Shimadzu TGA-51 thermogravimetric analyzer (Kyoto, Japan). A Cary 60 UV-vis spectrophotometer (Agilent, Santa Clara, CA, USA), with a wavelength scan of 190-1000 nm, was used to provide optical measurements.

Structural Characterization

XRD, as a well-known identification technique, was used to obtain information about the crystal structure of the films. Here, the XRD patterns of the pure PVA film, SrTiO3/CNTs, and PVA doped with 0.3, 0.7, and 1 wt% SrTiO3/CNTs are shown in Figure 1. A broad peak centered at 2θ = 19.4° was observed, resulting from the (101) plane of PVA in its monoclinic crystal structure [18]. The small peak appearing at 2θ = 40.6° indicates the semi-crystalline structure of the PVA film [19,20]. Strong molecular interaction via intermolecular hydrogen bonding forms the nature of the PVA's crystallinity [20].

The average crystal size of the nanoparticles can be calculated by the Debye-Scherrer method (see Equation (1)):
D = 0.9λ/(β cos θ)   (1)

where D is the average crystal size, while λ and β are the wavelength of the X-ray and the full width at half maximum, respectively. The SrTiO3 crystal size is, on average, 28.75 nm.

The FTIR spectrum of PVA/SrTiO3/CNTs was measured from 400 cm-1 to 4000 cm-1, as depicted in Figure 2. The FTIR spectrum had a broad absorption band centered at nearly 3270 cm-1 for the pure PVA film and PVA doped with different masses of SrTiO3/CNTs. These bands originated from -OH stretching vibration [22]. C-H stretching and out-of-plane C-H stretching caused the peaks located around 2920 and 1426 cm-1, respectively [23]. C-O stretching vibration caused the band at 1090 cm-1 [24].
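To make Equation (1) concrete, the sketch below evaluates the Scherrer crystal size for an assumed diffraction peak; the peak position and FWHM are illustrative placeholders, not the paper's fitted values, while λ = 1.54056 Å matches the stated CuKα source.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.154056, K=0.9):
    """Debye-Scherrer crystal size D = K*λ / (β cosθ), with β in radians."""
    beta = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative peak: 2θ = 32.4° (a typical SrTiO3 (110) reflection) with an
# assumed FWHM of 0.29°; these numbers are placeholders, not measured data.
print(f"D ≈ {scherrer_size(0.29, 32.4):.1f} nm")
```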
The scanning electron micrograph images of PVA and PVA doped with SrTiO3/CNTs are shown in Figure 3. A SEM image of the pure PVA film is shown in Figure 3a. The inclusion of a small content of SrTiO3/CNTs, i.e., 0.3 wt%, showed uniform distribution in the PVA polymer film, as depicted in Figure 3b. However, when the dopant content increased to 0.7 and 1 wt%, some agglomerations formed, as shown at the bottom of the SEM images in Figure 3c,d. The formation of agglomerations can be attributed to the dopants and the conditions of the polymer film preparation [18]. The increased concentration of nanoparticles could prevent their uniform dispersion and the interfacial interaction between polymer chains and nanoparticles [3,18,25]. The crosslinking between PVA and SrTiO3/CNTs is assigned to either strong hydrogen bonds or the increased viscosity of the polymer nanocomposites during the preparation process, and such phenomena have been previously reported in [3,26,27]. Figure 3a-d also shows the EDS analysis of the included elements in PVA films incorporating 0.3, 0.7, and 1 wt% SrTiO3/CNT samples, respectively. The EDX analysis evidences the increment of the SrTiO3 ratios in the prepared PVA polymer films. The corresponding elemental distribution mapping is illustrated in Figure S1 (ESI).

Thermal Stability

TGA analysis was used to investigate the thermal stability of the polymer nanocomposites at different temperatures. The mass losses of pure PVA and the PVA samples doped with SrTiO3/CNTs were plotted against temperature, as shown in Figure 4. It is obvious that there were three decomposition stages for all the polymer films. The first small degradation stage occurred in the range of 80-145 °C and is assigned to the evaporation of the solvent. For the pure PVA film, the second degradation stage occurred at 235-417 °C, while the third degradation stage began at 475 °C and continued up to the end of the scale at 600 °C. The PVA films doped with SrTiO3/CNTs also had two degradation stages, at 310-424 °C and 500-550 °C. Incorporating the SrTiO3/CNTs into the PVA delayed the thermal degradation, since the second degradation stage of pure PVA began at 235 °C while the doped PVA remained stable up to 310 °C. It is noticeable that small additions of SrTiO3/CNTs, with a maximum of 1 wt%, kept the PVA films stable to 75 °C higher than pure PVA. Therefore, the thermal stability of the PVA films was clearly improved when incorporating SrTiO3/CNTs.

Optical Measurements

The optical properties of pure PVA and PVA doped with different masses of SrTiO3/CNTs were investigated, and the results of their optical absorbance and transmittance spectra are shown in Figure 5.
The pure PVA film had an absorption band at 280 nm because of the π→π* interband electronic transitions [28,29]. Upon the addition of SrTiO3/CNTs into the PVA film, the absorption band slightly blue shifted due to band gap widening, as shown in Figure 5a. The magnitude of the blue shift intensified with higher concentrations of SrTiO3/CNTs, peaking at 6 nm when 1 wt% SrTiO3/CNTs was present. This significant shift suggests the successful preparation of polymer nanocomposites. The increase in the blue shift with higher concentrations of SrTiO3/CNTs can be attributed to the interactions between the nanoparticles and the polymer. As the concentration of SrTiO3/CNTs increases, there is a greater incorporation of these nanoparticles into the polymer matrix. This incorporation alters the optical properties of the nanocomposite material, leading to the absorption or emission wavelengths shifting towards the blue region of the spectrum. The peak shift reaching its maximum at 1 wt% SrTiO3/CNTs indicates optimal nanoparticle dispersion and interaction within the polymer matrix, reflecting the successful preparation of the nanocomposites. Moreover, as the concentration of SrTiO3/CNTs increased, it caused band defect formation in the PVA film [30]. Figure 5b shows the transmittance of undoped PVA as well as PVA doped with SrTiO3/CNTs.
It is obvious the transmittance decreased with the increasing mass of SrTiO3/CNTs. This observation is typical, since the inclusion and dispersion of nanoparticles in the polymer films result in the scattering of incident photons. The optical band gap of the polymer film is usually obtained from the Tauc equation (see Equation (2)) [31,32]:

αhυ = B(hυ - Eg)^n   (2)

where α is the absorption coefficient and B is a constant, while hυ indicates the incident photon energy. The value of n is 0.5 in the case of direct allowed transitions and 2 in the case of indirect allowed transitions. The direct and indirect band gap values can be estimated from the interception of the extended straight line of the curve with the zero-absorption axis in Figure 6a,b.

The direct optical band gap for pure PVA and PVA doped with 0.3, 0.7, and 1 wt% SrTiO3/CNTs gradually decreased to 5.06, 4.86, 4.76, and 4.50 eV, respectively. The tiny mass of the dopant, in the range of 0.3-1 wt%, caused drastic changes in the value of the direct band gap. The indirect optical band gap for pure PVA and PVA doped with 0.3, 0.7, and 1 wt% SrTiO3/CNTs gradually decreased to 5.31, 5.10, 5.08, and 4.92 eV, respectively.

The Edir and Eind values listed in Table 1 decreased with the increments of SrTiO3/CNTs. The substantial shift in the energy band gap of PVA, ranging from 5.06 to 4.50 eV, suggests that SrTiO3/CNT nanoparticles induce modifications in the electronic structure of the PVA matrix. This alteration is attributed to the localized electronic states formed by the incorporated SrTiO3/CNT nanoparticles within the optical band gap of PVA, functioning as trapping and recombination centers. Consequently, the observed change in the optical band gap occurs. Additionally, the decrease in the optical band gap may be attributed to an increase in the degree of disorder in the samples, resulting from changes in the polymer structure [33-37].
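A minimal sketch of the Tauc-plot extrapolation described above follows; the absorbance data are synthetic, and the linear-fit window is an assumption that in practice is chosen by inspecting the plot. For a direct transition, (αhυ)^2 is plotted against hυ and extrapolated to zero.

```python
import numpy as np

# Synthetic Tauc analysis for a direct gap: plot (αhν)^2 vs hν and
# extrapolate the linear region to zero to read off Eg.
hv = np.linspace(4.0, 6.0, 200)                     # photon energy (eV)
Eg_true = 5.06                                      # assumed gap (pure PVA)
alpha = np.sqrt(np.clip(hv - Eg_true, 0.0, None)) / hv

y = (alpha * hv) ** 2                               # Tauc ordinate, direct case
window = (hv > Eg_true + 0.05) & (hv < Eg_true + 0.5)
slope, intercept = np.polyfit(hv[window], y[window], 1)

print(f"estimated Eg ≈ {-intercept / slope:.2f} eV")
```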
Investigating the refractive index of the polymer nanocomposite films is meaningful for determining their suitable applications. Films with the advantage of a high refractive index and good electrical performance are qualified for optoelectronic applications. Some factors, i.e., the molecular structure, nanofillers, film thickness, etc., can influence a film's refractive index [38].

The reflectance and refractive index as functions of wavelength are plotted in Figure 7. Reflectance gives information about the amount of reflected light from a surface with respect to the incident light. The changes in the reflectance against the wavelength for the samples of PVA doped with different masses of SrTiO3/CNTs are shown in Figure 7a. Equation (3) shows the relationship between the reflectance (R) and the refractive index (n) [39,40]:

n = (1 + R)/(1 - R) + [4R/(1 - R)^2 - k^2]^(1/2)   (3)

where the extinction coefficient is represented by k = αλ/4π. The refractive indices gradually increased with the increasing mass of SrTiO3/CNTs, as displayed in Figure 7b. The enhancement of the refractive index is attributed to the formation of large clusters from the gathered SrTiO3/CNT nanoparticles [41]. Polymer films with a high refractive index are qualified for optical device applications, i.e., optical coatings, anti-reflection screens, etc.

The material dispersion parameters, known as the oscillator energy, dispersion energy, and transition moments, can be evaluated from the refractive index dispersion as in Equation (4), which includes the single-oscillator energy (E0) and dispersion energy (Ed) [42]:

n^2 - 1 = E0 Ed/(E0^2 - (hυ)^2)   (4)

Figure 8a shows the plot of (n^2 - 1)^-1 vs. (hυ)^2, from which the E0 and Ed values are obtained via its slope and intercept. The Ed values directly increase from 4.69 to 92.57 eV when the mass of SrTiO3/CNTs increases from 0 to 1 wt%, while the E0 values show an incremental trend from 4.48 to 4.98 eV, as listed in Table 2.
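A sketch of the single-oscillator (Wemple-DiDomenico) fit described above follows; the refractive-index dispersion data are synthetic, generated from assumed E0/Ed values, and the parameters are then recovered from the slope and intercept of (n^2 - 1)^-1 versus (hυ)^2, exactly as the text describes.

```python
import numpy as np

# Synthetic dispersion obeying the single-oscillator model
# n^2 - 1 = E0*Ed / (E0^2 - (hv)^2); E0, Ed are assumed test values here.
E0, Ed = 4.48, 4.69                       # eV, pure-PVA values from Table 2
hv = np.linspace(1.0, 3.0, 100)           # photon energies below E0 (eV)
n = np.sqrt(1.0 + E0 * Ed / (E0**2 - hv**2))

# Linearization: (n^2 - 1)^-1 = E0/Ed - (hv)^2 / (E0*Ed)
x = hv**2
y = 1.0 / (n**2 - 1.0)
slope, intercept = np.polyfit(x, y, 1)

E0_fit = np.sqrt(intercept / -slope)      # since slope = -1/(E0*Ed)
Ed_fit = -1.0 / (slope * E0_fit)
n0_fit = np.sqrt(1.0 + Ed_fit / E0_fit)   # static refractive index at hv = 0
print(f"E0 ≈ {E0_fit:.2f} eV, Ed ≈ {Ed_fit:.2f} eV, n0 ≈ {n0_fit:.2f}")
```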
The enhanced intermolecular interactions, which resulted from increments in polymer chain packing during orientation, caused a significant increase in the Ed values [43]. In addition, the significant increase in Ed values could have occurred due to thermal fluctuations originating from dispersion forces or long-range van der Waals forces [44]. The single-oscillator model, in the case of zero photon energy (hυ = 0), can be utilized to obtain the polymer's static refractive index (n0) [45]:

n0 = (1 + Ed/E0)^(1/2)   (5)

The calculated n0 values increased from 1.43 to 4.42 when the dopant (SrTiO3/CNTs) mass increased from 0 wt% to 1 wt%. It is well known that polymer films which have a high refractive index are qualified for application in optoelectronic devices. The addition of SrTiO3/CNTs as a dopant increased the refractive index of the PVA polymer films, and hence increasing the amount of added dopant enhanced the optical properties of the films. Therefore, these prepared films could be employed in display screens, encapsulations of organic light-emitting diodes, image sensors, and the fabrication of plastic lenses used in eyeglasses [46,47].

Figure 8b shows the plot of the optical dielectric loss ε2, expressed as 2nk, vs. hυ for the doped PVA films with different masses of SrTiO3/CNTs [48]. From this figure, the interception of the extrapolated linear part of the curve with the hυ axis gives the values of the real band gap (Er), which is a crucial parameter in determining the recommended optical transitions [49].
The Er values are relatively close to the values of Eind for all additions of the dopants. Given these outcomes, the majority of optical transitions in these films follow indirect transitions. The oscillator strength (f) of doped polymer films is another important parameter that affects their optical performance. Some key factors play major roles in tuning the oscillator strength (f) of the films, such as the chemical structure and the molecular mass of the polymer, the film thickness, and the aggregation of molecules in the polymer film. Knowing these factors helps to produce polymer films with optimal optical properties for the intended applications. Equation (6) can be used to calculate the oscillator strength of the polymer film [50]:

f = E0 Ed   (6)

The oscillator strength values of the PVA films are summarized in Table 2 and gradually increase with the additions of SrTiO3/CNTs. The molecular electronic structure depends on the chemical structure of the polymer films, and thus the linear optical susceptibility (χ(1)) and the third-order nonlinear optical susceptibility (χ(3)) are controlled by the electronic structure. Some applications of the polymer films, i.e., optical data storage and optical switching, rely on their linear and nonlinear optical susceptibilities [51].

The χ(1) and χ(3) values of the PVA/SrTiO3/CNT films summarized in Table 2 are calculated from Equation (7) [52]:

χ(1) = (n0^2 - 1)/4π,  χ(3) = A(χ(1))^4   (7)

The dopants can interact with the polymer, enhancing the local electric field as well as creating new energy levels, and thus increasing χ(3) [53]. The polarizability of the molecules depends on the chemical structure of the polymer molecules and therefore influences the nonlinear refractive index (n2) [54]. The n2 is a crucial parameter for determining the suitable applications of the prepared polymer films, and it can be calculated by Equation (8) [55,56]:

n2 = 12πχ(3)/n0   (8)

It is obvious that as the content of SrTiO3/CNTs increased, the polarizability of the polymer molecules was enhanced, thus increasing the nonlinear refractive index (n2), as reported in Table 2. The linear and nonlinear refractive indices of the PVA films improved with the additions of SrTiO3/CNTs. Therefore, the prepared films have high potential to be used in optoelectronic applications. Some optical parameters of the PVA/SrTiO3/CNT polymer films, i.e., Edir, Eind, n0, χ(3), and n2, were compared with previous studies, as reported in Table 3. The value of Edir in this study is relatively smaller than the values from previous studies. The values of n0, χ(3), and n2 are greater than those reported in previous studies, which demonstrates the suitability of PVA/SrTiO3/CNT polymer films for optoelectronic applications.
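A minimal sketch chaining the susceptibility relations above follows, assuming the standard Miller-rule forms for Equations (7) and (8); the constant A ≈ 1.7 × 10^-10 esu is the commonly used empirical value and is an assumption here, not a quantity reported in the text.

```python
import math

A = 1.7e-10  # esu, empirical Miller constant (assumption)

def optical_parameters(n0):
    """Linear/nonlinear optical parameters from the static refractive index."""
    chi1 = (n0**2 - 1.0) / (4.0 * math.pi)       # linear susceptibility
    chi3 = A * chi1**4                            # third-order susceptibility
    n2 = 12.0 * math.pi * chi3 / n0               # nonlinear refractive index
    return chi1, chi3, n2

# n0 values reported for the 0 and 1 wt% SrTiO3/CNT films.
for n0 in (1.43, 4.42):
    chi1, chi3, n2 = optical_parameters(n0)
    print(f"n0={n0}: chi1={chi1:.3f}, chi3={chi3:.3e} esu, n2={n2:.3e} esu")
```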
Conclusions

In conclusion, this study successfully fabricated PVA/SrTiO3/CNT polymer nanocomposite films using the solution casting method. The SrTiO3 nanoparticles exhibited a cubic crystal structure with an average crystal size of 28.75 nm. Incorporation of SrTiO3/CNTs up to 1 wt% resulted in an increase in the lattice parameter of SrTiO3 within the PVA polymer film from 3.91 Å to 3.95 Å. Additionally, the inclusion of SrTiO3/CNT nanofillers up to 1 wt% significantly improved the optical properties of the PVA films. Moreover, the thermal stability of the films was enhanced, with the onset of the second degradation stage increasing from 235 °C for pure PVA to 310 °C for doped PVA. SEM images showed a uniform distribution of 0.3 wt% SrTiO3/CNTs in the PVA polymer films, while some agglomerations were observed in films containing 0.7 and 1.0 wt% SrTiO3/CNTs. The direct and indirect optical band gaps of the PVA films decreased with increasing mass percentage of SrTiO3/CNTs. Furthermore, the single-oscillator energy (E0), dispersion energy (Ed), and optical susceptibility values increased with increasing nanofiller weight, with calculated n0 values ranging from 1.43 to 4.42. Additionally, improvements in nonlinear refractive index values were observed due to the inclusion of SrTiO3/CNTs. These findings suggest that the fabricated films exhibit promising optical and thermal properties suitable for optoelectronic applications. Future research efforts will focus on investigating the electrical properties of PVA/SrTiO3/CNT polymer nanocomposite films through electrical measurements.

Figure 1. The XRD diffraction peaks of pure PVA and PVA doped with SrTiO3/CNTs.
Figure 2. The FTIR spectrum of pure PVA and PVA doped with SrTiO3/CNTs.
Figure 3. SEM scans of (a) pure PVA film and (b-d) PVA films doped with SrTiO3/CNTs.
Figure 6. Plots for the PVA films doped with SrTiO3/CNTs: (a) (αhυ)^0.5 vs. hυ and (b) (αhυ)^2 vs. hυ for the PVA/SrTiO3/CNT films.
Figure 7. The (a) reflectance vs. wavelength and (b) refractive index vs. wavelength for the PVA/SrTiO3/CNT polymer films.
Main and Interaction Effects Selection for Quadratic Discriminant Analysis via Penalized Linear Regression

Discriminant analysis is a useful classification method. Variable selection for discriminant analysis is becoming more and more important in a high-dimensional setting. This paper is concerned with the binary-class problems of main and interaction effects selection for quadratic discriminant analysis (QDA). We propose a new penalized QDA for variable selection in binary classification. Under a sparsity assumption on the relevant variables, we conduct a penalized linear regression to derive sparse QDA by plugging the main and interaction effects into the model. The QDA problem is then converted to a penalized sparse ordinary least squares optimization by using the composite absolute penalties (CAP). A coordinate descent algorithm is introduced to solve the convex penalized least squares. The penalized linear regression can simultaneously select the main and interaction effects and also conduct classification. Compared with the existing methods of variable selection in QDA, extensive simulation studies and two real data analyses demonstrate that our proposed method works well and is robust in the performance of variable selection and classification.

School of Public Health, Capital Medical University; School of Mathematical Sciences, Peking University; Beijing Municipal Key Laboratory of Clinical Epidemiology

1. Introduction. Nowadays supervised classification has been an important problem in various medical fields such as genomics, disease diagnosis and brain imaging. Many classification methods have been developed, including linear and quadratic discriminant analysis (LDA and QDA) (Anderson, 1984), k-nearest-neighbors (Fix and Hodges, 1951), logistic regression (Cox, 1958), classification trees (Breiman et al., 1984) and SVM (Boser et al., 1992). The methods referred to above are introduced and summarized in the book (Hastie et al., 2009). Among the many classification methods, discriminant analysis is widely used in many applications due to its simplicity, interpretability, and effectiveness. In many cases, it is believed that only a subset of the available variables (also called features or predictors) may be contained in the classification structure (or model).
When irrelevant predictors are added into the model, they may bring in extra noise, and the classification performance may be degraded due to unstable and inaccurate estimation of the parameters. Therefore, conducting variable selection before fitting the model is advisable. Variable selection can identify a small set of discriminative variables and provide a more accurate classification model for describing future data. Model selection methods are usually used to carry out the variable selection in a probabilistic framework. A BIC-type criterion for variable selection in quadratic discriminant analysis has recently been studied by Zhang and Wang (2011) and Murphy et al. (2010). The BIC-type model assumes that the relevant variables and irrelevant variables jointly follow a multivariate normal distribution. The relevant variables have different means or covariances in different classes, and the irrelevant variables are conditionally independent of the class label. This means that the irrelevant variables can be completely modeled by a multivariate normal distribution conditionally on the relevant variables. The BIC criteria are based on the full likelihood of mixtures of multivariate normal distributions. Zhang and Wang (2011) proposed a standard backward algorithm to find the set of relevant variables, and Murphy et al. (2010) used a forward-backward algorithm for the variable selection. Zhang and Wang (2011) also showed the BIC's selection consistency under the normality assumption. However, performance may be compromised when this normality assumption does not hold. Moreover, LDA and QDA are inapplicable in high-dimensional cases where the model dimensionality p exceeds the sample size n, since the sample covariance matrices are then singular. Lasso-type regularization methods (Tibshirani, 1996; Zhao and Yu, 2006) are popular in the literature for high-dimensional variable selection. Lasso-type regularization procedures impose constraints represented by a penalty function, among which the L1-norm and L2-norm penalties have been previously explored for variable selection. Fan and Lv (2010) provided a good review of variable selection and penalty functions. In the high-dimensional classification literature, Lasso-type regularization methods have frequently been used for variable selection. Among them, Cai and Liu (2011) proposed a direct approach to sparse LDA by estimating the product of the precision matrix and the mean vector of the two classes, and Mai et al. (2012) introduced a direct approach that transforms the LDA problem into a penalized linear regression. Fan et al. (2015) proposed a two-step procedure for sparse QDA (IIS-SQDA), in which an innovated interaction screening approach based on the innovated transform of the precision matrices of the two classes is explored in the first step, and a sparse quadratic discriminant analysis is presented in the second step for further selecting important interactions and main effects while conducting classification simultaneously. Fan et al. (2015) also proved the consistency of the estimated coefficient vector of QDA, and further showed that the classification error of IIS-SQDA can be made arbitrarily close to the oracle classification error. However, IIS-SQDA is based on the assumption that the variables follow a Gaussian mixture distribution with conditional independence. If the relevant predictors do not follow the normality assumption, many irrelevant predictors can be selected.
Even if the relevant predictors and irrelevant predictors are discriminated correctly, the performance of classification may be much compromised. In this work we consider the binary classification problem with possibly unequal means or covariance matrices. Under a sparsity assumption on the relevant variables, we suggest using penalized linear regression to derive sparse QDA by plugging the main and interaction effects into the model. Motivated by the sparse LDA approach of Mai et al. (2012), we transform the QDA problem into a penalized sparse ordinary least squares optimization. We intuitively suppose that an interaction effect should be added to the regression model only after the corresponding main effects. Therefore, we propose using the composite absolute penalties (CAP) introduced by Zhao et al. (2009). A coordinate descent algorithm is presented to solve the convex penalized least squares. The penalized linear regression can simultaneously select the main and interaction effects, and also conduct classification. Extensive simulation studies and real data analyses demonstrate that our proposed method works well and is more robust than the existing methods in both the performance of variable selection and classification error.

The rest of the paper is organized as follows. Section 2 introduces discriminant analysis and existing variable selection methods. Section 3 proposes the penalized linear regression for sparse quadratic discriminant analysis: the penalized linear regression is established, the composite absolute penalty is used to carry out variable selection, and the coordinate descent algorithm is presented to solve the penalized least squares optimization. Extensive simulation studies and applications to two real data examples are presented in Section 4 and Section 5, respectively. Section 6 concludes with a discussion.

2. Discriminant analysis and existing variable selection methods. We consider a binary classification problem. Let X ∈ R^p be a vector of p continuous predictor variables and let G ∈ {1, 2} represent the class label. Quadratic discriminant analysis assumes that P(G = k) = π_k > 0 for k = 1, 2 and that X | G = k follows a multivariate normal distribution N(μ_k, Σ_k), k = 1, 2. Here μ_k = (μ_k1, μ_k2, ..., μ_kp)^T ∈ R^p and Σ_k ∈ R^{p×p} denote the mean vector and covariance matrix of the predictors X in the k-th class, respectively. Then the quadratic discriminant function is

δ_k(x) = log π_k − (1/2) log|Σ_k| − (1/2)(x − μ_k)^T Σ_k^{−1} (x − μ_k),

where x ∈ R^p is the column vector of the predictors for one observation. Let π̂_k, μ̂_k and Σ̂_k be the estimates of π_k, μ_k and Σ_k. Then the optimal Bayes rule minimizing the misclassification error is to predict the new subject as the class with the maximal discriminant function value,

Ĝ(x) = argmax_{k ∈ {1,2}} δ̂_k(x).

Recently Murphy et al. (2010) and Zhang and Wang (2011) have proposed almost the same variable selection methods based on the BIC criterion for quadratic discriminant analysis. Let S = {j_1, ..., j_m} denote a candidate model that contains X_{j_1}, ..., X_{j_m} as the relevant predictors, and let S^c = S_F \ S, where S_F = {1, 2, ..., p} is the set of all candidate predictors. The BIC-type criteria are based on the same assumptions: (1) The relevant predictors X_(S) are the smallest set of the candidate predictors which are sufficient for predicting the class label. This assumption can be described by the following equality:

P(G | X) = P(G | X_(S)),     (2.1)

where X_(S) and X_(S^c) denote the subvectors of the predictors corresponding to the sets S and S^c.
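As a concrete illustration of the quadratic discriminant function and the plug-in Bayes rule defined above, the following minimal numpy sketch (ours, not taken from the paper) evaluates δ̂_k(x) from estimated class priors, means and covariances and assigns the class with the larger value:

```python
import numpy as np

def qda_discriminant(x, pi_k, mu_k, Sigma_k):
    """delta_k(x) = log(pi_k) - 0.5*log|Sigma_k| - 0.5*(x-mu_k)' Sigma_k^{-1} (x-mu_k)."""
    diff = x - mu_k
    _, logdet = np.linalg.slogdet(Sigma_k)
    return np.log(pi_k) - 0.5 * logdet - 0.5 * diff @ np.linalg.solve(Sigma_k, diff)

def qda_predict(x, pis, mus, Sigmas):
    """Plug-in Bayes rule: predict the class with the maximal discriminant value."""
    scores = [qda_discriminant(x, p, m, S) for p, m, S in zip(pis, mus, Sigmas)]
    return int(np.argmax(scores)) + 1  # classes labelled 1 and 2
```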
It can be verified that (2.1) is equivalent to saying that the irrelevant predictors are conditionally independent of G given the relevant predictors. (2) All of the relevant and irrelevant predictors follow a jointly multivariate normal distribution given the class label. The conditional distribution of the relevant predictors given the class label is

X_(S) | G = k ~ N(μ_k(S), Σ_k(S)),

where μ_k(S) ∈ R^|S|, Σ_k(S) ∈ R^{|S|×|S|} is a positive definite matrix, and |S| is the size of the set S. The conditional distribution of the irrelevant predictors given the relevant predictors is a multivariate normal linear regression that does not depend on the class label,

X_i(S^c) | X_i(S) ~ N(α + Γ^T X_i(S), Ω),

where X_i(S) and X_i(S^c) denote the subvectors of the predictors collected from the i-th subject corresponding to the sets S and S^c. Let θ̂_S be the maximum likelihood estimators. The BIC proposed by Zhang and Wang (2011) and Murphy et al. (2010) based on the full likelihood is defined as

BIC(S) = 2 log L(θ̂_S) − df(S) log n,

where df(S) is the number of parameters needed for the model with selected predictors X_(S). Even though the BIC criterion was proved to be consistent, it is not applicable when the sample size n_k is less than the dimension p of the predictors for any class k = 1, 2, and it is clearly ill-posed if n_k < p. When the sample size n_k is less than the dimension p, Fan et al. (2015) proposed a two-step procedure for sparse QDA (IIS-SQDA). For the two-class mixture Gaussian classification, the Bayes rule is an equivalent decision rule of the form

Q(x) = (1/2) x^T Ω x + δ^T x + ζ,     (2.2)

where Ω = Σ_2^{−1} − Σ_1^{−1} and ζ is some constant depending only on π_k, μ_k, Σ_k, k = 1, 2. A new observation x is predicted as class 1 if and only if Q(x) > 0. The first step in IIS-SQDA is to sparsify the support of Ω = Σ_2^{−1} − Σ_1^{−1} for interaction screening; two transformations based on the precision matrices are used to find the interaction variables. A regularization method is then proposed for further selecting important interactions and main effects in the second step of IIS-SQDA. Although IIS-SQDA is proved to enjoy the sure screening property in selecting the interactions and is close to the oracle classifier in terms of misclassification, it depends on the Gaussian mixture assumption and is not robust.

3. Regularization methods and coordinate descent algorithm. The approach to variable selection proposed in this work is motivated by the Bayes decision function (2.2) for the binary-class QDA. Suppose we numerically code the class labels g = 1 and g = 2, respectively, as y = 1 and y = −1. We use the linear regression model in which the predictors are the main and interaction effects of the variables,

y_i = β_0 + x_i^T β_⊙ + x̃_i^T β_⊗ + ε_i,

where x̃_i denotes the vector of all interaction effects, x̃_i = (x_{i1}^2, x_{i1}x_{i2}, ..., x_{ip}^2)^T; the coefficients of the linear regression model are estimated by least squares. For the linear regression model in the variable selection problem, the classical regularized estimates of the parameters β are given by the penalized least squares

β̂ = argmin_β Σ_{i=1}^n (y_i − β_0 − x_i^T β_⊙ − x̃_i^T β_⊗)^2 + P_λ(β),

where λ denotes the tuning parameter(s) controlling the amount of regularization, and P_λ(β) denotes a generic penalty function. We suppose that an interaction effect should be added to the regression model only after the corresponding main effects. This means that the penalty for the interactions should be larger than that for the main effects. Thus we impose the constraint λ_1/λ_2 > p on the two tuning parameters in the penalty function. This constraint on the tuning parameters is identical to that of IIS-SQDA (Fan et al., 2015). Hence the CAP-SQDA proposed in this work is able to adaptively and automatically choose between sparse QDA and sparse LDA through the penalized linear regression.
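The regression formulation above can be sketched in code. The design matrix augments the p main effects with the p(p+1)/2 quadratic and interaction terms; the penalty shown is a plausible hierarchical (CAP-style) composition, with group terms weighted by λ1 and interaction terms by λ2, since the paper's exact penalty expression is elided in this extraction:

```python
import numpy as np
from itertools import combinations_with_replacement

def expand_interactions(X):
    """Augment the p main effects with the p(p+1)/2 terms x_k * x_l, k <= l."""
    n, p = X.shape
    pairs = list(combinations_with_replacement(range(p), 2))
    X_int = np.column_stack([X[:, k] * X[:, l] for k, l in pairs])
    return np.hstack([X, X_int]), pairs

def cap_objective(beta0, beta, X_full, y, lam1, lam2, p, pairs):
    """Penalized least squares with a plausible hierarchical (CAP-style) penalty:
    lam1 weights each group (a main effect together with its interactions),
    lam2 additionally weights the interactions, so interactions enter only
    after their main effects. This composition is an assumption, not the
    paper's exact formula."""
    resid = y - beta0 - X_full @ beta
    b_main, b_int = beta[:p], beta[p:]
    group = sum(
        np.sqrt(b_main[k] ** 2
                + sum(b_int[m] ** 2 for m, (i, j) in enumerate(pairs) if k in (i, j)))
        for k in range(p)
    )
    return 0.5 * resid @ resid / len(y) + lam1 * group + lam2 * np.abs(b_int).sum()
```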
In optimizing the penalized linear regression, we always center each predictor variable. When we center all variables (including all predictors x_i and x̃_i and all codes y_i for i = 1, 2, ..., n), the optimal value of β_0 is 0 for all values of λ_1 and λ_2. The optimization for CAP-SQDA can then be expressed in the more explicit form (3.2). Zhao et al. (2009) proposed using the BLASSO algorithm to compute CAP estimates in general. However, the BLASSO algorithm tries to solve the whole solution path and so is only applicable for one tuning parameter. There are two tuning parameters in our proposed optimization problem (3.2), so BLASSO is not appropriate for the optimization (3.2). Cyclical coordinate descent methods are natural approaches for solving convex problems with ℓ_1 and ℓ_2 constraints. These methods have been widely proposed for lasso-type regularization problems, including the classical sparse group lasso (Friedman et al., 2008) and glmnet (Friedman et al., 2010). We also use a coordinate descent method to solve the penalized least squares problem (3.2). The coordinate descent algorithm for solving the optimization (3.2) can be converted to a general one-dimensional optimization (3.3), where a ≥ 0, c ≥ 0, d ≥ 0, e_j > 0 and s ∈ {0, 1, 2, ..., p}. If |b| ≤ c, the minimizer is easily seen to be θ̂ = 0. If |b| > c, the one-dimensional optimization (3.3) can be solved by a Newton-type method or by the optimize function in R, which is a combination of golden section search and successive parabolic interpolation. In particular, if b > c, the minimizer θ̂ lies in a known positive interval. To apply the one-dimensional optimization (3.3) in CAP-SQDA, denote the relevant Gram-type matrices G, B and C computed from the main and interaction design matrices, where p̃ = p(p + 1)/2; these products are also needed in the coordinate descent algorithm. For the update of the main effect parameter β_k, the quantities a, b, c, d, e_j in Equation (3.3) can be calculated accordingly, where I(·) denotes the indicator function, β_{⊙\k} the subvector of β_⊙ with the k-th element removed, G_{k,\k} the k-th row of G with the k-th element removed, B_k the k-th row of B, and C_k the k-th element of C. For the update of the interaction effect parameter β_{k,l}, the quantities a, b, c, d, e_j in Equation (3.3) can be calculated similarly, where B_m denotes the m-th column of B. This leads to the following algorithm:

Step 1: Initialize the parameters (β_⊙, β_⊗).

Step 2: For the updated estimate of β_k in the t-th loop, fix β_l, l ≠ k, and β_{k,l}, and calculate a, b, c, d, e_j; if |b| < c, set β_k^(t+1) = 0; otherwise minimize the optimization (3.3) and obtain the update β_k^(t+1) of β_k. Update all the parameters β_k, 1 ≤ k ≤ p, and β_{k,l}, 1 ≤ k ≤ l ≤ p, in order.

Step 3: Iterate the entire Step 2 over t = 1, 2, ... until convergence.

Let λ = (λ_1, λ_2), and denote β̂_0, β̂_⊙(λ), β̂_⊗(λ) as the estimates of the parameters β_0, β_⊙, β_⊗, respectively, obtained by solving the penalized linear regression problem. Then the classification rule is to assign a new observation z ∈ R^p to class 1 if and only if β̂_0 + z^T β̂_⊙(λ) + z̃^T β̂_⊗(λ) > 0, where z̃ is the vector of interaction effects corresponding to z. In practice, we need to select a good tuning parameter such that the misclassification error is as small as possible. Five-fold cross-validation (CV) is a popular method for tuning, and hence we use it here. Note that there are two tuning parameters in CAP-SQDA, so the details of the CV procedure follow those of the elastic net (Zou and Hastie, 2005). The Lasso-type estimates are generally biased.
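The thresholding-then-numeric-search pattern of the coordinate update can be sketched as follows. The exact one-dimensional objective (3.3) is elided in this extraction, so f_smooth stands in for its smooth part; scipy's bounded minimize_scalar plays the role of R's optimize (golden section plus parabolic interpolation):

```python
from scipy.optimize import minimize_scalar

def coordinate_update(f_smooth, b, c, bound=1e3):
    """One coordinate update in the spirit of (3.3). If |b| <= c, the penalized
    minimizer is exactly 0; otherwise a bounded scalar search locates the
    minimizer on the side of zero indicated by the sign of b. f_smooth and the
    search bound are placeholders (assumptions), not the paper's exact criterion."""
    if abs(b) <= c:
        return 0.0
    sign = 1.0 if b > 0 else -1.0
    res = minimize_scalar(lambda t: f_smooth(sign * t),
                          bounds=(0.0, bound), method="bounded")
    return sign * res.x
```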
Because the Lasso-type estimates are generally biased, we suggest using OLS in CAP-SQDA when the dimension of the active main and interaction effects in the penalized linear regression is smaller than the sample size. The active sets for the main and interaction effects are

S_1 = {k : β̂_k(λ) ≠ 0},  S_2 = {(k, l) : β̂_{k,l}(λ) ≠ 0}.

If |S_1| + |S_2| < n, the OLS estimates (β̂^ols_⊙S_1, β̂^ols_⊗S_2) of β_⊙S_1 and β_⊗S_2 are obtained by minimizing the unpenalized least squares restricted to the selected effects. Then, if the dimension of the selected effects is smaller than the sample size, the sparse QDA classifier is defined as follows: assign the new observation z, with corresponding interactions z̃, to class 1 if

β̂_0^ols + z^T β̂^ols_⊙S_1 + z̃^T β̂^ols_⊗S_2 > 0.

4. Simulation studies. In this section, a number of simulations are conducted to compare the performance of the proposed method with two methods based on the full-likelihood BIC: the backward procedure presented in Zhang and Wang (2011), denoted the BIC_b method, and the forward-backward procedure proposed in Murphy et al. (2010), denoted the BIC_fb method. Five simulation experiments are considered. In the first three experiments, named Models 1-3, the predictors are generated from multivariate normal distributions; the predictors in the other two experiments, named Model 4 and Model 5, are not multivariate normally distributed. In each simulation experiment, we consider low-dimensional settings with p = 20 and high-dimensional settings with p = 100, 200. For each setting, 50 observations per class generated from the true model serve as the training data, while 5000 extra independent observations per class serve as the testing data. For comparison, we consider five performance measures: the misclassification rate (MR), the numbers of irrelevant main effects (FP.main) and irrelevant interaction effects (FP.inter) falsely included in the classification rule, and the numbers of relevant main effects (FN.main) and interaction effects (FN.inter) falsely excluded from the classification rule. These five performance measures are the same as the classification and variable selection performance measures employed in Fan et al. (2015).

(1) Model 1: The relevant predictors are X_S = {X_1, X_2}. For the first class, the mean vector and covariance matrix are μ_1,S = (2.5, −1)^T and Σ_1,S = [1, 0; 0, 1] ∈ R^{2×2}, respectively; for the second class, they are μ_2,S = (−0.5, 0)^T and Σ_2,S = [3, 1; 1, 3] ∈ R^{2×2}, respectively. The remaining p − 2 variables are independently and identically generated. This model is borrowed from Zhang and Wang (2011). There are two main effects and three interaction terms in the Bayes rule for Model 1. There is only a small difference between the two covariance matrices of the predictors for the two classes, which means that the interaction effects are weak in Model 1. Table 1 shows that the values of FN.inter are approximately equal to 3. The reason is that Model 1 is similar to an LDA model. The BIC criterion is essentially a method for variable selection rather than effect selection and therefore presents the best performance in terms of FN.inter. The results in Table 1 demonstrate that our proposed method CAP-SQDA can effectively select important effects and conduct classification simultaneously for Model 1 under all dimensional settings.

(5) Model 5: The first five variables are generated as in Model 4. The remaining p − 5 variables are irrelevant: the next p/2 − 5 variables are independently generated from N(μ̃, 1), and the remaining p/2 variables are independently generated from the Beta distribution B(ν, 0.5), where μ̃ and ν are U(0, 1) and U(1, 5) random variables, respectively.
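A sketch of the data-generating process for Model 1 described above; the distribution of the p − 2 irrelevant predictors is assumed standard normal here, since it is elided in this extraction:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_model1(n_per_class=50, p=20):
    """Model 1: two relevant predictors with class-specific means/covariances;
    the p-2 irrelevant predictors are assumed i.i.d. standard normal (an
    assumption; the exact distribution is elided in this extraction)."""
    mu = {1: np.array([2.5, -1.0]), 2: np.array([-0.5, 0.0])}
    Sigma = {1: np.eye(2), 2: np.array([[3.0, 1.0], [1.0, 3.0]])}
    X, y = [], []
    for k in (1, 2):
        rel = rng.multivariate_normal(mu[k], Sigma[k], size=n_per_class)
        irr = rng.standard_normal((n_per_class, p - 2))
        X.append(np.hstack([rel, irr]))
        y += [k] * n_per_class
    return np.vstack(X), np.array(y)
```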
Table 4 summarizes the classification results for Model 4 and Model 5. We not only give the misclassification rates, but also report the values of the MR of CAP-SQDA minus those of BIC and IIS-SQDA (MRM). According to Table 4, although the misclassification rates of all methods in Models 4-5 are clearly larger than those of the ORACLE classifier when the assumption of a Gaussian mixture distribution for all variables does not hold, CAP-SQDA exhibits the best performance in terms of MR across all settings. In some settings, such as p = 100, 200 for Model 5, the MRs of CAP-SQDA are significantly smaller than those of BIC and IIS-SQDA. The results demonstrate that our proposed method is robust for QDA in classification.

5. Application. 5.1. Parkinson dataset. We apply the classification methods to the Parkinson dataset shared in the UCI repository in 2008 (Little et al., 2007). This dataset is composed of a range of biomedical voice measurements from healthy people and Parkinson's disease (PD) patients. There are p = 22 predictors in this dataset. The main aim is to discriminate healthy people from those with PD. There are n_1 = 147 samples from PD patients and n_2 = 48 from healthy people. We randomly split the 195 samples into a training set consisting of 73 samples from the PD group and 24 samples from the healthy group. For each split, we applied five different methods to the training data and then calculated the classification error using the test data. The tuning parameters were selected via five-fold cross-validation. We repeated the random splitting 100 times. The means and standard errors of the classification errors and model sizes for the different classification methods are summarized in Table 5. The FULL method has the worst classification performance, which means that variable selection is necessary for the analysis of the Parkinson dataset. The BIC_b method and BIC_fb method select on average 10.63 and 7.88 variables, respectively. The IIS-SQDA method has the smallest MR, but selects 11.20 variables and 20.94 effects. Our proposed method selects the smallest numbers of variables and effects and achieves classification accuracy very close to that of the IIS-SQDA method.

5.2. Breast cancer dataset. The breast cancer dataset consists of the gene expressions of breast cancer patients, originally studied in Van et al. (2002). The goal is to predict whether a female breast cancer patient relapses, from gene expression data. The dataset contains a total of 78 samples, with 44 of them in the good prognosis group and 34 of them in the poor prognosis group. Since there are some missing values for one patient in the poor prognosis group, that patient was removed in the study of Fan et al. (2015), leaving 77 samples. As in Fan et al. (2015), we use the p = 231 genes in Van et al. (2002) and randomly split the 77 samples into a training set and a test set; 26 samples from the good prognosis group and 19 samples from the poor prognosis group are randomly selected for the training set. We apply the classification methods and the results are summarized in Table 6. For the analysis of the breast cancer dataset, our proposed method selects a very small number of variables and effects and achieves high classification accuracy. BIC_fb selects the fewest variables, but has the largest MR. The IIS-SQDA method achieves the lowest MN, but selects the largest number of variables and effects.
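The repeated random-split protocol used for both datasets can be sketched generically; fit and predict are placeholders for any of the compared classifiers, and the per-class training counts follow the splits described above:

```python
import numpy as np

def repeated_split_mr(X, y, fit, predict, n_train, n_rep=100, seed=0):
    """Mean and standard error of the misclassification rate over repeated
    stratified random splits, as in the Parkinson and breast-cancer analyses.
    `fit` and `predict` are placeholder callables (assumptions); `n_train`
    maps each class label to its training-set size (e.g., {1: 73, 2: 24})."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_rep):
        tr = np.zeros(len(y), dtype=bool)
        for k, n_k in n_train.items():
            idx = np.flatnonzero(y == k)
            tr[rng.choice(idx, size=n_k, replace=False)] = True
        model = fit(X[tr], y[tr])
        errs.append(np.mean(predict(model, X[~tr]) != y[~tr]))
    return float(np.mean(errs)), float(np.std(errs) / np.sqrt(n_rep))
```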
For IIS-SQDA, both the number of main effects and the number of all effects are larger than the sample size of the training set. Our proposed method misclassifies 2.46 more clinical outcomes than the IIS-SQDA method, whereas it selects nearly half as many effects. Both our proposed method and the IIS-SQDA method select very few interaction effects, which indicates that a sparse LDA is suitable for the breast cancer dataset. In the study of Fan et al. (2015), the penalized logistic regression analysis with the main effects only also has high classification accuracy, with MN = 6.95. This demonstrates that our proposed method can adaptively and automatically choose between sparse LDA and sparse QDA.

6. Conclusion. In this paper we propose a penalized linear regression, named CAP-SQDA, for quadratic discriminant analysis with two classes, and develop a coordinate descent algorithm to solve the penalized least-squares problem. The proposed procedure first transforms the sparse QDA problem into a penalized sparse ordinary least squares optimization by using the composite absolute penalty, and then performs main effect and interaction selection through regularization. The efficiency and robustness of CAP-SQDA have been demonstrated through simulation studies and real data analyses by comparing it with other methods. For real datasets, CAP-SQDA usually selects far fewer variables while achieving high classification accuracy. In future work, it would be interesting to generalize the proposed method to quadratic discriminant analysis for multi-class classification; the key to variable selection for multi-class quadratic discriminant analysis is to propose a new composite penalty. In addition, an efficient computational method is needed for CAP-SQDA in ultrahigh-dimensional data analysis.
2017-02-15T12:13:05.000Z
2017-02-15T00:00:00.000
{ "year": 2017, "sha1": "c004f7cf7089aff80d12b6a00f316c95d0664b03", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c004f7cf7089aff80d12b6a00f316c95d0664b03", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
139366132
pes2o/s2orc
v3-fos-license
Synthesis and Reduction of Bimetallic Methyl-Bridged Rare-Earth Metal Complexes, [(C5H4SiMe3)2Ln(μ-CH3)]2 (Ln = Y, Tb, Dy)

The complexes [Cp′2Ln(μ-CH3)]2 (Cp′ = C5H4SiMe3; Ln = Y, Tb, Dy) were reduced to determine if these methyl-bridged complexes would form mixed-valent 4f^n5d^1 Ln(II)/4f^n Ln(III) compounds or bimetallic 4f^n5d^1 Ln(II) compounds containing 5d^1−5d^1 metal−metal bonds upon reduction. Reaction of the known bridged-chloride complexes, [Cp′2Ln(μ-Cl)]2, 1-Ln (Ln = Y, Tb, Dy), with MeLi forms the bridged-methyl complexes [Cp′2Ln(μ-CH3)]2, 2-Ln, which were crystallographically characterized for Tb and Dy. KC8 reduction of 2-Ln in the presence of 2.2.2-cryptand produced 3-Y, 3-Tb, and 3-Dy, which exhibited intense dark colors and broad absorbance peaks around 400 nm with molar extinction coefficients of 1700, 2300, and 1800 M^−1 cm^−1, respectively, which are characteristic of Ln(II) ions. The dark maroon 3-Y product had an axial electron paramagnetic resonance spectrum at 77 K (g1 = 1.99, A1 = 17.9 G; g2 = 2.00, A2 = 17.7 G) and a two-line isotropic spectrum at 273 K (g = 1.99, A = 18.4 G), which indicates that an Y(II) ion is present. Although these results are indicative of Ln(II) ions present in solution, crystallographic evidence was not obtained to establish the structures of these complexes.

■ INTRODUCTION

The +2 oxidation state has recently been identified in crystallographically characterizable monometallic molecular complexes of yttrium and all of the lanthanides (except radioactive Pm). This was accomplished using the coordination environment of three trimethylsilyl-substituted cyclopentadienyl ligands, as shown in eq 1. 1−6 Previously, the +2 oxidation state in molecular species had been limited to Eu, Yb, Sm, Tm, Dy, and Nd, which have 4f^(n+1) electron configurations obtained by reducing a 4f^n Ln(III) precursor. 7−10 Spectroscopic and structural characterizations, as well as density functional theory (DFT) calculations, have shown that in the trigonal coordination environment of a [(C5H3RR′)3]^3− ligand set (R = H, SiMe3; R′ = SiMe3), the new lanthanide ions exhibit a 4f^n5d^1 ground state, with a 4d^1 configuration for yttrium. 1−6 Examples of these new +2 ions are also known with other ligands, including other cyclopentadienyl (C5) ligand sets.

The discovery of the 4f^n5d^1 electron configurations in these new ions presents the possibility of generating complexes that contain Ln−Ln bonds. Traditionally, metal−metal bonding between two lanthanides has been considered unlikely because the 4f orbitals have too limited a radial extension from the nucleus to have significant overlap. However, electrons in d orbitals are well suited to making metal−metal bonds. Since the new 4f^n5d^1 Ln(II) ions are found in monometallic complexes with sterically crowded ligand environments, they are not ideal for placing two lanthanides in close proximity to bond. To address this problem, reductions of [Cp′2Y(μ−Cl)]2 and [Cp′2Y(μ−H)(THF)]2 were explored. 18 DFT calculations suggested that reduction of these bimetallic complexes could form complexes containing Y−Y bonds. 18 Although spectroscopic studies of these reductions, including electron paramagnetic resonance (EPR) and UV−vis spectroscopy, suggested the formation of an Y(II) ion, 6 trimetallic hydride-centered complexes, {[Cp′2Ln]3(μ-H)} (Ln = Y, Tb, Dy), were isolated from the reduction reactions. 18 The reduction of [Cp′2Y(μ−Cl)]2 provided an EPR spectrum at 77 K consistent with an Y(II) ion.
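To make the EPR numbers concrete, the resonance condition hν = gμB·B locates where a g ≈ 1.99 signal should appear on the field axis. A minimal sketch, assuming a representative X-band frequency of 9.5 GHz (within the 9.3−9.8 GHz range quoted in the Experimental Details below):

```python
# EPR resonance condition h*nu = g*mu_B*B: the field at which the g ~ 1.99
# Y(II) signal is expected, assuming a 9.5 GHz X-band frequency.
H_PLANCK = 6.62607015e-34    # J s
MU_BOHR = 9.2740100783e-24   # J / T

def resonance_field_gauss(freq_hz, g):
    return H_PLANCK * freq_hz / (g * MU_BOHR) * 1e4  # 1 T = 10^4 G

print(round(resonance_field_gauss(9.5e9, 1.99)))  # ~3411 G
```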
This reduction product, however, was extremely thermally unstable, which prohibited the collection of a room-temperature EPR spectrum, and a crystal structure could not be obtained. To avoid the possibility of forming such trimetallic hydride-centered products, the reduction of complexes containing electron-deficient methyl-bridged ligands, [Cp′2Ln(μ−CH3)]2, has been explored and is reported here. Reduction of the known complex [Cp′2Y(μ−CH3)]2 20 is described, as well as analogous reactions of the Tb and Dy analogues, which were synthesized for this study and crystallographically characterized for definitive identification.

Reduction of the 2-Ln complexes with 1 equiv of KC8 in the presence of 1 equiv of 2.2.2-cryptand (crypt) in THF produced intensely colored red-brown products, 3-Ln, eq 2, typical of crystallographically characterized complexes of rare-earth ions in the +2 oxidation state, eq 1. 2−4,6,18 The EPR spectrum of 3-Y at 77 K has an axial signal at g1 = 1.99, A1 = 17.9 G; g2 = 2.00, A2 = 17.7 G, and the room-temperature EPR spectrum has an isotropic signal at g = 1.99, A = 18.4 G (Figure 2). These patterns are consistent with the interaction of an unpaired electron with an 89Y nucleus (100% abundant, I = 1/2) and are characteristic of Y(II) complexes (Figure 2). 3−5,11 The UV−vis spectra of 3-Y, 3-Tb, and 3-Dy are shown in Figure 3. The dark-colored solutions of 3-Ln maintained their color in solution at −30 °C for several days, but they did not yield crystalline products suitable for definitive characterization by X-ray diffraction. Attempts to make analogues with 18-crown-6 instead of crypt were similarly unsuccessful.

■ CONCLUSIONS

Although the bridged-methyl complexes [Cp′2Ln(μ-CH3)]2, 2-Ln, can be reduced to form dark solutions with EPR and UV−vis spectroscopic features consistent with Ln(II), isolation of crystallographically characterizable Ln(II) complexes has not been possible. These results indicate that the bis(Cp′) coordination environment is not optimal for crystallizing bimetallic complexes with a rare-earth metal in the +2 oxidation state.

■ EXPERIMENTAL DETAILS

All manipulations and syntheses described below were conducted with the rigorous exclusion of air and water using standard Schlenk line and glovebox techniques under an argon atmosphere. Solvents were sparged with UHP argon and dried by passage through columns containing Q-5 and molecular sieves prior to use. Elemental analyses were conducted on a Perkin-Elmer 2400 Series II CHNS elemental analyzer. UV−vis spectra were collected at 298 K using a Jasco V-670 absorption spectrometer. EPR spectra were collected at X-band frequency (9.3−9.8 GHz) on a Bruker EMX spectrometer equipped with an ER041XG microwave bridge, and the magnetic field was calibrated with DPPH (g = 2.0036). Infrared (IR) transmittance measurements were taken on compressed solids using a Cary 630 spectrophotometer with a diamond ATR attachment.

[Cp′2Dy(μ-CH3)]2, 2-Dy. A solution (−30 °C) of 1-Dy (150 mg, 0.158 mmol) in Et2O (5 mL) was slowly added to a −30 °C slurry of MeLi (12 mg, 0.546 mmol) in Et2O (5 mL). The total volume of the solution was increased to 20 mL with cold Et2O. The slightly cloudy, colorless solution was allowed to warm to room temperature and stirred overnight. The volatiles were removed from the solution in vacuo. The product was extracted into hexane (10 mL) and then centrifuged to remove white solids, presumably LiCl. The white solids were washed with hexane (5 mL) twice.
The volatiles were removed from the hexane solution to isolate a white powder. The white powder was extracted into hexane (10 mL) again, and any insoluble materials were discarded. The volatiles were removed from the hexane solution to isolate 2-Dy as a white powder (123 mg, 0.136 mmol, 86%). X-ray quality crystals were grown from a concentrated hexane solution at −30 °C.

Reduction of 2-Y. In an argon-filled glovebox, a colorless solution of 2-Y (106 mg, 0.140 mmol) and crypt (53 mg, 0.141 mmol) in THF (2 mL) was chilled to −30 °C. The solution was passed through a KC8 column chilled to −30 °C, and a dark red-brown solution, 3-Y, resulted. EPR spectra were obtained at 77 K and at room temperature. UV−vis (THF): λmax = 420 nm; ε = 1700 M^−1 cm^−1.

Reduction of 2-Tb. In a procedure analogous to the reduction of 2-Y, 2-Tb (200 mg, 0.224 mmol) and crypt (84 mg, 0.224 mmol) were combined in THF (2 mL) and passed through a KC8 column chilled to −30 °C.
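Two small worked calculations tied to the numbers above, as a sketch: the 86% yield of 2-Dy follows from the isolated mass and the moles of the 1-Dy starting material (the ~904 g/mol molecular weight of [Cp′2Dy(μ-CH3)]2 is estimated here, not taken from the paper), and the molar extinction coefficient of 3-Y follows from the Beer−Lambert law with a hypothetical absorbance and concentration chosen to reproduce the reported ε:

```python
def percent_yield(product_mg, product_mw_g_mol, limiting_mmol):
    """Percent yield from isolated mass, assuming 1:1 dimer-to-dimer stoichiometry."""
    return 100.0 * (product_mg / product_mw_g_mol) / limiting_mmol

def molar_extinction(absorbance, conc_molar, path_cm=1.0):
    """Beer-Lambert law: epsilon = A / (c * l), in M^-1 cm^-1."""
    return absorbance / (conc_molar * path_cm)

# 2-Dy: 123 mg isolated from 0.158 mmol of 1-Dy; MW ~904 g/mol is our estimate.
print(round(percent_yield(123, 904.1, 0.158)))  # ~86 (%)
# 3-Y: a hypothetical A = 0.17 from a 1.0e-4 M solution in a 1 cm cuvette
# reproduces the reported epsilon of 1700 M^-1 cm^-1.
print(molar_extinction(0.17, 1.0e-4))           # 1700.0
```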
2019-08-18T15:53:01.865Z
2019-01-07T00:00:00.000
{ "year": 2019, "sha1": "fd2a7630beab4ca2580f3beea836975ce0b87dc1", "oa_license": "acs-specific: authorchoice/editors choice usage agreement", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.8b02665", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b19240618d90403adaaf14fd5b255b11e6ee59b1", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
254854280
pes2o/s2orc
v3-fos-license
Bistable perception, precision and neuromodulation

Abstract: Bistable perception follows from observing a static, ambiguous (visual) stimulus with two possible interpretations. Here, we present an active (Bayesian) inference account of bistable perception and posit that perceptual transitions between different interpretations (i.e. inferences) of the same stimulus ensue from specific eye movements that shift the focus to a different visual feature. Formally, these inferences are a consequence of precision control that determines how confident beliefs are and changes the frequency with which one can perceive, and alternate between, two distinct percepts. We hypothesized that there are multiple, but distinct, ways in which precision modulation can interact to give rise to a similar frequency of bistable perception. We validated this using numerical simulations of the Necker cube paradigm and demonstrate the multiple routes that underwrite the frequency of perceptual alternation. Our results provide an (enactive) computational account of the intricate precision balance underwriting bistable perception. Importantly, these precision parameters can be considered the computational homologs of particular neurotransmitters (i.e. acetylcholine, noradrenaline, dopamine) that have previously been implicated in controlling bistable perception, providing a computational link between the neurochemistry and perception.

Introduction

Bistable perception ensues from observing a static, ambiguous stimulus with two possible interpretations, e.g., the Necker cube or Rubin's vase. Here, alternation of the visual percept arises when the stimulus offers two distinct explanations that cannot be perceived simultaneously (Brascamp et al., 2018). For example, whilst observing Rubin's vase, individuals switch between perceiving a black vase or two facial profiles. Experimentally, it has been shown that neurotransmitters are crucial for modulating this phenomenon (van Loon et al., 2013), specifically implicating catecholaminergic (Pfeffer et al., 2018), dopaminergic (Schmack et al., 2013), cholinergic (Sheynin et al., 2020), and noradrenergic (Einhauser et al., 2008)¹ neurotransmission in modulating the frequency of perceptual switching. In this study, we provide a computational account of how these particular neurotransmitters can influence bistable perception. For this, we rely on how their computational homologues, i.e., precision modulation under active (Bayesian) inference (Parr & Friston, 2017), can induce perceptual alternation.
Active inference is a Bayesian formulation of brain function that casts perception and action as 'self-evidencing' (Hohwy, 2016), or minimising free energy across time (Da Costa et al., 2020; Friston, 2019; Friston et al., 2017; Kaplan & Friston, 2018). It characterises perception as an inferential process (Clark, 2013) across the space of all possible hypotheses that could have given rise to a particular stimulus (Friston, 2005). These inferences are a consequence of how confident (or precise) beliefs are over particular model distributions. Broadly, such models comprise sequences of 'hidden' states or causes which generate observable sensory data. For example, if the probability of a sensory input given its cause is extremely precise, then one can confidently attribute that sensory observation to a particular cause. Contrariwise, an imprecise probability distribution implies an ambiguous association between cause and effect, and sensory observations can do little to resolve the uncertainty about their causes. This is precisely why precision control can influence the type of inferences made and induce bistable perception by mimicking the role of specific neuromodulators (Moran et al., 2013; Parr et al., 2018; Schwartenbeck et al., 2015; Vincent et al., 2019).

Here, we use particular precision parameters to investigate the computational mechanisms that underwrite bistable perception. We hypothesised that there are multiple, but distinct, ways in which precision control can interact to give rise to bistable perception. These precision manipulations influence the frequency with which one can perceive (and alternate between) two distinct percepts and speak to an intricate precision balance underwriting bistable perception. Explicitly, we evaluate multiple combinations of precision, over three distinct model distributions, that may give rise to bistable perception. These are (i) sensory precision, (ii) precision over state transitions, and (iii) precision over probable action plans, as these are thought to be mediated by acetylcholine (Moran et al., 2013; Parr et al., 2018), noradrenaline (Vincent et al., 2019), and dopamine (Schwartenbeck et al., 2015), respectively.

To demonstrate perceptual switching, as a function of various precisions, we instantiate an active inference model² of the Necker cube paradigm (Gregory, 1980). In this example, the agent is presented with an ambiguous, static image, i.e., the Necker cube, and infers its cause; namely, a cube facing either to the right or left. How quickly and often the agent alternates between the two inferred (i.e., perceived) orientations is determined by the confidence with which particular beliefs are updated, as modulated by the different precision parameters. We discuss the correspondence between these precision terms, their neuromodulatory homologues and their role in facilitating bistable perception in Table 1.

¹ Studied indirectly via pupil dilation; see Larsen, R. S., & Waters, J. (2018). Neuromodulatory Correlates of Pupil Dilation [Mini Review]. Frontiers in Neural Circuits, 12(21). https://doi.org/10.3389/fncir.2018.00021
² Here, we use agent and model to mean one and the same thing.
Inevitably, these associations are vast oversimplifications. However, they are useful heuristics that appear to be consistent with much of the data on neuromodulatory function. Briefly, sensory precision (i.e., over the likelihood function) determines the confidence in beliefs about the causes of outcomes and can be associated with (selective) attention (Mirza et al., 2019). Similarly, precision over state transitions models the volatility of hidden states. If this is extremely precise, the agent has high confidence about the evolution of states over time. Conversely, with a low state transition precision, the agent's beliefs about future states become progressively more uncertain (i.e., have high Shannon entropy). Lastly, the precision over probable action plans (i.e., policy selection) determines the confidence in the selected action trajectory, or policy. We expected that increasing each of these precisions would decrease the frequency of visual perceptual alternation, induced by precise beliefs over the perceived orientation (or the visual context), independently of the other precision terms. Since all precision terms were hypothesised to have a similar consequence for switching rate (see Table 1), we analysed the posterior probability of the cube's orientation after each switch occurred, to provide a dissociable account of these precision manipulations. In other words, the differential effects of the precision manipulations were assessed in terms of what the synthetic subject 'believed' at the time of each perceptual switch. Policy precision is linked to confidence about actions (i.e., eye movements) and can decrease the switch frequency.

This paper is structured as follows. First, we review formal (i.e., computational) accounts of bistable perception. Next, we briefly introduce active inference with a special focus on precision. This provides a natural segue to our generative model for simulating bistable perception of the Necker cube, a canonical paradigm in the bistable perception literature (Choi et al., 2020; Kornmeier & Bach, 2005; Wernery et al., 2015). The model is then used to test our hypotheses regarding the multiple, and distinct, routes through which bistable perception can arise. Finally, we discuss the results to understand how our simulated manipulations of precision relate to neuromodulation in the brain.

Computational accounts of bistable perception

Previously, there have been many attempts to account for bistable perception phenomena, ranging from dynamical systems models (Fürstenau, 2010, 2014) through to predictive processing frameworks (Hohwy et al., 2008). The latter include a formulation (Weilnhammer et al., 2017) that characterises perceptual switches as a consequence of prediction errors emerging from residual evidence for the suppressed percept. In this account, bistable perception emerges from a progressive increase of the prediction error not explained by the extant percept, engendering the alternate explanation. We extend this account of bistable perception using active inference. Explicitly, we illustrate that variations in precision (over distinct model parameters) can give rise to bistable perception by influencing how confidently sensory observations are inferred.
Our account also aligns with another model of bistable perception, introduced by Weilnhammer et al. (2021). They observed that bistable perception emerged from a fluctuation in the sensory information available to the brain. This fluctuation can be explained by saccadic suppression, the suppression of sensory pathways during saccades (Crevecoeur & Kording, 2017), and can lead to increased perceptual alternation. Under predictive processing accounts, this suppression relies upon changes in the precision the brain assigns to sensory data at different times during the action-perception cycle. This highlights that eye movements are necessary for understanding bistable perception and can therefore provide behavioural evidence of sensory precision modulations. Importantly, this aligns with our model by demonstrating that bistable perception is (i) modulated via different levels of (sensory) precision and (ii) experimentally linked to eye movements; namely, active vision or inference.

Separately, Parr et al. (2019) used active inference to investigate the computational mechanisms that underwrite bistable perception. They postulated that bistable perception is a consequence of alternations in (covert) attentional deployment towards certain stimulus features, when two different percepts may be supported by different stimulus features (e.g., luminance contrast at different places in the visual field). The alternation is a consequence of the accumulation of uncertainty about the percept relating to the unattended features. By choosing to deploy attention to resolve this uncertainty, we switch our focus and therefore our percept. The numerical experiments accompanying this hypothesis showed that changes in different precision parameters influenced the frequency of transitions, given the inferences being made. This process has been linked to eye movements focusing on distinct parts of the illusory object, which is in line with a call for active vision formulations of bistable perception (Safavi & Dayan, 2022).

Bistable perception, precision modulation and active inference

Here, we briefly describe active inference and the precision parameters that underwrite the computational mechanisms that may give rise to bistable perception.
Active inference

Active inference, a corollary of the free energy principle, is a formal way to describe the behaviour of self-organising (random dynamical) systems that exchange with an external environment. It postulates that these systems self-organise by minimising their surprisal about sensory observations³ (o), i.e., maximising their (Bayesian) model evidence (Friston et al., 2010; Sajid, Da Costa, et al., 2021), or 'self-evidencing' (Hohwy, 2016). Formally, this involves the optimisation of a free energy functional, i.e., an upper bound on surprisal (Beal, 2003; Da Costa et al., 2020; Sajid, Ball, et al., 2021). This functional can be decomposed in terms of complexity and accuracy, and its minimisation thus means finding an accurate explanation for sensory observations that incurs the least complexity cost:

F = D_KL[Q(s) || P(s)] − E_Q(s)[ln P(o | s)].     (1)

Here, D_KL is the Kullback-Leibler divergence, and o and s refer to the outcomes and hidden states (or causes), respectively. Free energy depends upon a generative model comprising a probability distribution P that describes the joint probability of (unobserved) causes and (observed) consequences. This generative model is usually specified in terms of a (likelihood) mapping from hidden causes to outcomes and priors over the hidden causes. The approximate posterior distribution Q in (1) expresses the (posterior) probabilities of hypotheses about hidden states, based on the agent's observations. Uncertainty about anticipated observations is reduced by selecting policies (i.e., probable action trajectories, π) that a priori minimise the expected free energy (G)⁴ (Parr & Friston, 2019):

G(π) = Σ_τ E_Q(o_τ, s_τ | π)[ln Q(s_τ | π) − ln P(o_τ, s_τ)],     (2)

where π refers to a policy and τ is a (future) time-step. The expected free energy equips the agent with a formal way to assess different policies in terms of how likely they are to fulfil the agent's preferences and yield information gain about the hidden states of the world. A policy is then selected based on the expected free energy of each policy, modulated by the precision parameter γ:

Q(π) = σ(−γ · G(π)),     (3)

where σ denotes the softmax function. Thus, the higher the value of γ, the more precise the beliefs about actions; in other words, policy selection becomes more confident. In summary, active inference dictates that (variational and expected) free energy is minimised under a particular model of the environment, i.e., a generative model (Friston et al., 2017). These generative models encode particular hypotheses about the current state of affairs. Practically, the model is realised as a partially observable Markov decision process (POMDP), with the assumption that discrete outcomes are caused by discrete hidden states; for technical details see Da Costa et al. (2020). These models describe the statistical nature of the environment in terms of probability distributions.

³ Surprisal is the negative logarithm of an outcome probability, i.e., −ln P(o).
⁴ For the technical reader, note the resemblance to the terms in Eq. 1, but with the expectation taken under the approximate posterior supplemented by the likelihood, resulting in the predictive distribution Q(o_τ, s_τ | π) = P(o_τ | s_τ)Q(s_τ | π). This treats planning as inference (Attias, H. (2003). Planning by Probabilistic Inference. Proc. of the 9th Int. Workshop on Artificial Intelligence and Statistics; Botvinick, M., & Toussaint, M. (2012). Planning as inference. Trends Cogn Sci., 16(10), 485-488), i.e., we can evaluate plausible policies before outcomes have been observed.
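Equation (3) can be sketched directly: the posterior over policies is a softmax of the negative expected free energies, sharpened by γ. A minimal example with three hypothetical policies:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def policy_posterior(G, gamma):
    """Eq. (3): Q(pi) = sigma(-gamma * G(pi)). Larger gamma gives more confident
    (near-deterministic) policy selection; gamma -> 0 gives near-uniform policies."""
    return softmax(-gamma * np.asarray(G, dtype=float))

G = np.array([1.2, 0.9, 1.5])          # hypothetical expected free energies
print(policy_posterior(G, gamma=1.0))   # graded preferences over policies
print(policy_posterior(G, gamma=16.0))  # almost all mass on the best policy
```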
The A parameter encodes the probability distribution over state-outcome pairs (i.e., the likelihood distribution), and B encodes the probability distribution over hidden state transitions (i.e., the transition distribution). Both are specified as categorical distributions. The precision terms ζ, ω, γ are inverse temperature parameters. With high precision, the category with the highest probability converges to 1, whereas with low precision, categories tend towards equal probability (Parr & Friston, 2017; Sajid, Parr, Gajardo-Vidal, et al., 2020). The above probability distributions describe transitions between states in the environment that generate outcomes. These transitions depend on actions, which are sampled from the posterior beliefs over the policies. Consequently, the sampled actions change the state of the world, giving rise to new outcomes and continuing the perception-action loop. For the purposes of this paper, we assume the priors over the precision parameters are themselves infinitely precise.

Precision modulation

We posit that these precision parameters (ζ, ω, γ) can independently modulate bistable perception, since they can shape perceptual confidence and the frequency with which the inferred state of the world alternates. ζ is the sensory precision over the probabilities of the likelihood distribution A in the generative model, where (hidden) states map onto observations. Thus, sensory precision expresses the confidence with which the model can infer a cause from observations. Practically, high precision (e.g., ζ = 16) ensures the model can be confident that a particular outcome will be generated reliably by the latent state. Conversely, low precision (e.g., ζ < 0.2) implies an ambiguous relationship between causes and outcomes, and observations do little to resolve uncertainty about their causes. The probabilistic mapping from the current state s_τ to the next s_τ+1 is denoted by the state transition matrix B. The term ω encodes the precision of the state transition matrix, and it expresses the confidence with which the model can predict the present from the past and vice versa. Precision over beliefs about policies is encoded by γ, which corresponds to the model's ability to confidently select the next action.

We hypothesized that an increase in all three precision terms would lead to a decreased perceptual transition frequency. Furthermore, we hoped to address how to distinguish the influence of each precision term (i.e., each neuromodulator) on bistable perception via the frequency of eye movements and acuity (measured using post-switch perceptual confidence), and, finally, via the modulatory effects on neuronal responses encoding distinct percepts of the Necker cube.
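The inverse-temperature role of these precision terms can be sketched as follows: a categorical distribution's log-probabilities are scaled by the precision and renormalised, so high values sharpen the distribution towards one-hot and low values flatten it towards uniform. The e^−8 floor anticipates the parameterisation given in the model section below:

```python
import numpy as np

def precision_weight(p, prec, eps=np.exp(-8)):
    """Inverse-temperature scaling of a categorical distribution:
    sigma(prec * ln(p + eps)), applied column-wise to a matrix."""
    logp = prec * np.log(np.asarray(p, dtype=float) + eps)
    e = np.exp(logp - logp.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

col = np.array([0.9, 0.05, 0.05])
print(precision_weight(col, 16.0))  # ~[1, 0, 0]: near one-hot (confident)
print(precision_weight(col, 0.1))   # ~[0.4, 0.3, 0.3]: near uniform (ambiguous)
```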
Precision and neuromodulatory systems

These precision parameters have previously been associated with specific neuromodulatory systems (Parr et al., 2018; Parr & Friston, 2017; Sajid, Friston, et al., 2020); see Table 1. Briefly, sensory precision (ζ), state transition precision (ω) and policy precision (γ) can be read as cholinergic, noradrenergic, and dopaminergic neurotransmission, respectively. Some empirical studies suggest a link between cholinergic release and (the frequency of) perceptual transitions. For example, Sheynin et al. (2020) demonstrate that enhanced potentiation of acetylcholine (ACh) transmission attenuates perceptual suppression during binocular rivalry. Similarly, increased noradrenergic release has also been associated with an altered frequency of perceptual fluctuations (Einhauser et al., 2008; Pfeffer et al., 2018). Pfeffer et al. (2018) demonstrate that high catecholamine levels altered the temporal structure of the intrinsic variability of population activity and increased the frequency of perceptual alternations induced by an ambiguous visual stimulus. Finally, dopaminergic release has also been associated with a faster perceptual transition frequency (Schmack et al., 2013).

Simulations of the Necker cube paradigm

In the remaining sections, we model bistable perception, and the intricate precision balance that undergirds it, using simulations of the Necker cube paradigm, e.g., (Choi et al., 2020).

A generative model of the Necker cube

Our generative model of the Necker cube paradigm has two hidden states, fixation point and orientation, and two outcome modalities, where and feature (Figure 1). The hidden state fixation point has three levels representing the bottom-left, top-right and initial fixation locations, and the orientation state has two levels representing the left and right orientations. These fixation point locations are motivated by Choi et al. (2020), who observed eye movements between these particular fixation points during perceptual switches. The outcome where reports the location of the eye fixation: initial, top-right or bottom-left. The outcome feature reports the corner of the cube being observed: Corner 1 (C1), Corner 2 (C2) or neither (labelled as null).
The likelihood function maps states to outcomes (i.e., state-outcome pairs). Here, the feature likelihood depends on both the fixation point and orientation factors. For the generative process (i.e., the process we used to generate the observations during simulation), the where likelihood depends only on the fixation point factor and therefore generates outcomes independently of the orientation state. Conversely, the generative model's where likelihood depends on both the fixation point and orientation factors and explicitly maps each fixation point to a specific orientation (see Figure 1). Thus, the bottom-left (top-right) fixation location is only plausible under the left (right) orientation. Next, we equipped the model with control states (i.e., states whose transitions depend on actions) over eye movements via the fixation point factor. Thus, it can control whether to fixate on the top-right, bottom-left, or initial fixation point. The orientation transition is not controllable, and the mapping between current and future states is such that the left (right) orientation always transitions to the left (right) orientation (Figure 1). Furthermore, the agent was equipped with strong preferences (measured in nats, i.e., the natural logarithm) for avoiding the initial where location (-20 nats). This was to encourage the agent to sample the bottom-left and top-right locations, as eye movements between these locations have been shown to be associated with perceptual transitions in the Necker cube paradigm (Choi et al., 2020). At each timepoint, the agent could choose from 3 different actions (i.e., 1-step policies): fixating at the initial, bottom-left or top-right location. The prior beliefs about the initial states were initialised to 0.5 for the left and right orientations, 1 for the initial fixation point, and zero otherwise.
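For concreteness, the discrete state and outcome spaces of the POMDP just described can be written out as plain data structures (a sketch; the labels are ours, and the zero preference values for the non-initial locations are an assumption consistent with the relative preferences described above):

```python
# Discrete state and outcome spaces of the Necker-cube POMDP described above.
states = {
    "fixation_point": ["initial", "bottom_left", "top_right"],  # controllable factor
    "orientation":    ["left", "right"],                        # uncontrollable factor
}
outcomes = {
    "where":   ["initial", "bottom_left", "top_right"],
    "feature": ["C1", "C2", "null"],
}
# One-step policies: one action per fixation target.
actions = ["fixate_initial", "fixate_bottom_left", "fixate_top_right"]
# Log-preferences over 'where' outcomes: strong aversion (-20 nats) to the
# initial location; zeros for the other locations are an assumption.
C_where = {"initial": -20.0, "bottom_left": 0.0, "top_right": 0.0}
# Priors over initial states: uniform over orientations, certain initial fixation.
D = {"orientation": [0.5, 0.5], "fixation_point": [1.0, 0.0, 0.0]}
```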
Precision and perceptual alternation

The Necker cube generative model was used to demonstrate the computational mechanisms that underwrite bistable perception. For this, we simulated 729 models with different combinations of the three precision parameters: sensory precision (ζ), state transition precision (ω) and policy precision (γ). The precision values used are specified in Table 2.

ζ is the sensory precision associated with the likelihood distribution A, i.e., which (hidden) states gave rise to particular observations. The agent's likelihood is obtained by passing the generative-process likelihood through a softmax function σ with inverse temperature ζ,

A_ijk = σ(ζ · ln(A_gen,ijk + e^−8)),

where e^−8 is a small number that prevents numerical overflow, i represents the outcomes, and j and k represent the orientation (either left or right) and fixation point (either bottom-left or top-right) factors, respectively. Note, we have excluded the initial fixation point for clarity, as its likelihood matrix is uninformative in the generative model. The two factors are unequal either in the combination of the bottom-left fixation point and the right orientation, or the contrary (see Figure 2). Here A_gen (the bold A in the figures) represents the likelihood matrix of how the data are generated (i.e., precise mappings from states to where and feature outcomes)⁵. The precision parameter ζ modulates only the columns for the preferred orientation under a given fixation point (i.e., the bottom-left fixation point (labelled as 1) maps to the right orientation (labelled as 2) and vice versa), while the unpreferred orientation is parameterised as a uniform distribution. Adjusting the columns of the likelihood matrix in this way can be regarded as manipulating the relative sensitivity of neuronal populations, encoding the probability of each possible (hidden) state, to sensory afferents during model inversion or perceptual inference.

Figure 2A provides a graphical illustration of how the precision parameter values modulate the feature likelihood. Here, the sensory precision parameter (ζ) modulates the mapping from orientation states to feature outcomes as a function of location states. Under this parameterisation, a high sensory precision ζ ≥ 0.5 (right panels in Figure 2A) leads to a precise likelihood mapping for the state pairs bottom-left location with right orientation and top-right location with left orientation. Thus, the agent would attribute C1 to the right orientation under the bottom-left position, and C2 to the left orientation under the top-right position. Conversely, under a low sensory precision, the likelihood mapping from an orientation and location to feature outcomes becomes imprecise (left panels in Figure 2A). With this mapping, the agent could not disambiguate between the causes of the C1 and C2 outcomes via the perceived orientation, regardless of the sampled fixation position. We motivate our choices for these likelihood mappings by the degree of visibility of the features, assuming that the cube is opaque. Under this assumption, one should not be able to see Corner 1 for a left-oriented cube. Similarly, one should not be able to see Corner 2 for a right-oriented cube (see Figure 1 for the left and right orientations). These assumptions are translated into likelihood mappings over the feature outcomes for the aforementioned orientation and fixation point combinations, whose precision is encoded by ζ.

The probabilistic mapping from the current state s_τ to the next s_τ+1 is denoted by the state transition matrix B. The term ω encodes the precision of the state transition matrix in the same fashion as the term ζ,

B = σ(ω · ln(B_gen + e^−8)),

where B_gen (the bold B) represents the transition matrix describing how the hidden states change and give rise to new observations, which is set to be completely precise in the generative process. The B matrix expresses the confidence with which the model can predict the present from the past and the future, and vice versa.
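A sketch of the generative-process matrices just described, using precision_weight() (redefined here for self-containment, as in the earlier sketch) to obtain the agent's precision-modulated versions. The exact entries are assumptions consistent with the opaque-cube reasoning above rather than values copied from the paper:

```python
import numpy as np

def precision_weight(p, prec, eps=np.exp(-8)):
    """sigma(prec * ln(p + eps)), applied column-wise, as in the earlier sketch."""
    logp = prec * np.log(np.asarray(p, dtype=float) + eps)
    e = np.exp(logp - logp.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

# Generative-process 'feature' likelihoods for the two informative fixation
# points (the uninformative initial fixation is omitted, as in the text).
# Rows: outcomes (C1, C2, null); columns: orientation (left, right).
# Opaque cube: C1 is visible only for a right-oriented cube from bottom-left,
# C2 only for a left-oriented cube from top-right.
A_feature_bottom_left = np.array([[0.0, 1.0],   # C1
                                  [0.0, 0.0],   # C2
                                  [1.0, 0.0]])  # null
A_feature_top_right   = np.array([[0.0, 0.0],
                                  [1.0, 0.0],
                                  [0.0, 1.0]])

# Orientation transitions in the generative process: perfectly persistent.
B_orientation = np.eye(2)

# The agent's generative-model matrices are the precision-weighted versions:
A_model = precision_weight(A_feature_bottom_left, prec=16.0)  # sharp (high zeta)
B_model = precision_weight(B_orientation, prec=0.1)           # volatile (low omega)
```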
Figure 2B provides a graphical illustration of how precision changes the orientation state transition matrix. An increase in the precision of orientation state transitions (ω) leads to a precise mapping between the orientation at the current and next timepoints (right panel of Figure 2B). With a precise transition matrix, the agent would expect the orientation to remain the same over time. Conversely, under a low precision, the agent would expect the orientation to change frequently (left panel of Figure 2B). The modulation of the γ parameter is omitted from this figure, as the best understanding of how this precision parameter influences bistable perception is provided by Equation 3. This parameter modulates the confidence over eye movement selection. If low, this precision prompts the agent to switch between the two locations with greater randomness.

Table 2. Precision (hyper-)parameters used to simulate bistable perception.

(Figure 2 caption, fragment: precision values range from low to high (e.g., ζ > 0.5); values above 0.5 look visually the same as the value of 0.5 and are therefore excluded from the representation.)

Perceptual switch definition

Next, we quantified what constitutes a perceptual switch. This is necessary for quantifying the number of perceptual transitions given particular precision combinations. Here, a switch is counted when a particular orientation (e.g., left) has a high posterior probability (> 0.5) at the current time point t but a low posterior probability (< 0.5) at the previous time point t − 1:

switch(t) = 1 if s_t(orientation) > 0.5 and s_{t−1}(orientation) < 0.5.     (7)

In Equation 7, the (bold) s variables are the probabilities that parameterise our approximate posterior Q(s). Intuitively, this means that a switch is defined as a change from a belief that the left (or right) orientation is most likely to a belief that the right (or left) orientation is most likely.

Face validation

Here, we present a numerical simulation that establishes the face validity of the Necker cube generative model. For this, we simulated the model with fixed, arbitrary values of ζ, ω and γ (Figure 3). We observed alternating inferences over the orientation as the trial progressed. This was induced by shifts in eye movements that sampled different corners of the Necker cube. Under our definition, a perceptual switch is observed at time point 7, when both conditions outlined above are met (first row of Figure 3). Conversely, a perceptual switch would not be counted at timestep 2, because the posterior probability over the appropriate orientation at the previous timestep 1 is not < 0.5 but exactly 0.5. Furthermore, this switch is usually accompanied by an action; see the middle panel of Figure 3.
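The switch criterion of Equation (7), including the strict inequality that excludes a previous posterior of exactly 0.5 (as in the face-validation example above), can be sketched as:

```python
import numpy as np

def count_switches(posterior_left, thresh=0.5):
    """Eq. (7): a switch at time t requires the posterior for an orientation
    to exceed 0.5 at t having been strictly below 0.5 at t-1, so a posterior
    of exactly 0.5 (as at the first timestep of the example trial) never counts."""
    p = np.asarray(posterior_left, dtype=float)
    switches = 0
    for t in range(1, len(p)):
        for q in (p, 1.0 - p):  # check both orientations
            if q[t] > thresh and q[t - 1] < thresh:
                switches += 1
    return switches

print(count_switches([0.5, 0.8, 0.9, 0.3, 0.2, 0.7]))  # 2 (switches at t = 3 and t = 5)
```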
Figure 3. An example trial with 32 time-steps. The first row represents the posterior probability for the hidden state orientation. The second row shows which actions, i.e., eye movements, have been selected (cyan dots) and the posterior probability of each policy. This has only 31 time-steps, as actions are modelled for the next step. The last row depicts the sampled outcomes over time with cyan dots and the preferences over outcomes with different shades in the background. Here, the light and dark shades illustrate that the agent has a strong aversion to the Null outcome, observed only at the initial fixation (IF) point, but has a relatively higher preference for the C1 and C2 outcomes observed at the bottom-left and top-right locations, respectively. A perceptual switch is highlighted using the red boxes, where the red arrow in the second row shows that the switch is (mostly) accompanied by an action towards the preferred fixation point. The red box in the last row shows that observing the outcome C1 facilitated the perceptual switch from the left to the right orientation in this instance, as shown in the first row. The example simulation uses a fixed combination of the precision values ζ, ω and γ.

Simulating perceptual switches

Using the criteria in Equation 7, we measured the number of perceptual switches under different precision combinations (Table 2). Each precision combination was simulated 64 times, using random seed initialisation, with a trial length of 32 epochs. Figure 4 presents the average number of switches under each precision combination. On average, an increase in precision (regardless of the corresponding model parameter) decreases the number of perceptual transitions, independently of the other precision terms. For example, as ω increased from 0.001 to 10, we observed a decrease in the number of perceptual transitions. This is unsurprising given our observation regarding Figure 2, i.e., beliefs across time are propagated more confidently for high ω values. Thus, the orientation does not change frequently during the trial, which reduces perceptual switches. For ζ, the increased precision gives higher confidence about what is being perceived, thus removing the uncertainty-minimising behaviour that would lead to sampling the other fixation point, which could increase the chances of a perceptual switch. It is worth noting, however, that for specific combinations of high ζ and low ω values there is an increase in the number of switches. We investigate this in the next section (Figure 4; upper-left and middle panels). Lastly, decreased precision over policy selection (γ) increases perceptual switches. This is because low γ values make all policies more likely, leading to a higher frequency of eye movements and, eventually, perceptual switches.
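The sweep over precision combinations can be organised as sketched below. The run_trial function here is only a toy surrogate that mimics the qualitative effect (stickier posteriors under higher precision); it is not the actual active-inference simulation. count_switches is the function from the previous sketch.

```python
import itertools
import numpy as np

def run_trial(zeta, omega, gamma, rng, T=32):
    """Toy surrogate (NOT the real model): a posterior over one orientation
    that becomes stickier as the combined precision grows, mimicking the
    qualitative pattern in Figure 4."""
    stickiness = 1.0 - 1.0 / (1.0 + zeta + omega + gamma)  # in (0, 1)
    p = np.empty(T)
    p[0] = rng.uniform()
    for t in range(1, T):
        p[t] = stickiness * p[t - 1] + (1 - stickiness) * rng.uniform()
    return p

def sweep(zetas, omegas, gammas, n_runs=64, seed=0):
    """Average switch count per precision combination, as in Figure 4."""
    rng = np.random.default_rng(seed)
    results = {}
    for z, w, g in itertools.product(zetas, omegas, gammas):
        counts = [count_switches(run_trial(z, w, g, rng)) for _ in range(n_runs)]
        results[(z, w, g)] = float(np.mean(counts))
    return results

grid = [0.001, 0.1, 1.0, 10.0]
table = sweep(grid, grid, grid)
print(table[(0.001, 0.001, 0.001)], table[(10.0, 10.0, 10.0)])  # many vs. few switches
```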
The observations above all rest upon a relatively simple insight. For non-zero precision parameters, the best action is always to continue to fixate the same location. This is because the observation associated with our current fixation location supports a belief in a specific orientation (e.g., the right orientation if looking at the lower left). Under this belief, the alternative location (e.g., upper right) is uninformative as, if the cube were (partially) opaque, there would be little useful visual information there, with the opposing corner obscured by the near surface of the cube. In other words, if I am looking to the lower left and infer that the cube is in the right orientation, I would expect that the corner in the upper right will not be visible, so I have no reason to look there. As such, the expected free energy will always be lower for the current location compared to the alternatives. The result is a low switching frequency, with switches occurring only when the relatively improbable action of moving our eyes is sampled. However, the relative improbability of this action is modulated by the precision parameters.

Increases in uncertainty about the orientation (via decreases in the sensory⁶ or transition precision) attenuate the differences between the expected free energy of each action, resulting in more uncertainty in action selection and increasing the number of switches. Decreases in the policy precision attenuate the influence of the expected free energy on action selection, thus making the improbable action relatively less improbable and increasing the switching rate. In short, changes in switching rates occur when greater uncertainty favours more stochastic deviations from an optimal policy of maintaining fixation.

Figure 4. The average number of switches for different precision combinations. We plot the average number of switches across 32 trials, each comprising 32 time-steps. Each heatmap is associated with a different value of the policy precision (γ); within each heatmap, the axes plot the different ζ and ω values. The average switch count ranges from 0 (dark blue) to 15 (yellow).

⁶ The situation is slightly more complicated for the sensory precision parameter, as this has a dual role. The first is in determining the confidence in the orientation (as inferred from potentially unreliable sensory data). The second is in determining the ambiguity (which contributes to the expected free energy) of each location.
Dissociating individual precision manipulations

To dissociate the individual influences of each precision during bistable perception, we investigated the (average) posterior probability of the cube's orientation after the switch occurred, alongside average switch rates. Here, the posterior probability denotes the s(left) or s(right) value used to identify a switch (Equation 7). The differences across each precision were evaluated by considering each individually and taking its (marginal) average across all possible combinations (Figure 5; Table 3). These differences revealed that the posterior switch probability and average switch rate for both ζ and ω followed a non-linear relationship, as modelled with a polynomial expansion (Table 3). Conversely, we observed a negative linear association (i.e., a 1st-order polynomial; Table 3) for the posterior switch probability and average switch number for γ. This highlights that a high posterior switch probability (i.e., values > 0.5), which determines the switch rate, can manifest in multiple ways; see the supplementary text for further analysis. Furthermore, there is a degenerate (many-to-one) mapping between the posterior switch probability and the number of switches across the different precision terms (Figure 5). This speaks to the multiple, but distinct, routes through which perceptual transitions can arise.

Discussion

We investigated how precision manipulation could underwrite bistable perception. For this, we cast bistable perception, the phenomenon where perception alternates between distinct interpretations of a static stimulus, as an enactive process associated with specific eye movements that shift the focus from one visual feature to another, leading to a perceptual transition (Choi et al., 2020). This ensues from a dissociation between the inferred percept and the sensory observation (Brascamp et al., 2018) as distinct features of the visual stimulus are sampled. Computationally, we show that the frequency of switches between the two percepts depends on a modulation of (at least) three precision terms that determine the confidence of posterior beliefs. Here, we illustrated that there are distinct ways in which precision (hyper-)parameters, associated with neuromodulators, can interact to affect bistable perception, and how their influences can be dissociated from each other using post-hoc analysis of posterior beliefs. Below, we relate the distinct precision terms to neuromodulators based on the previous literature (Parr & Friston, 2017).
Precision manipulation and neuromodulation

Sensory precision is thought to be modulated via acetylcholine in the active inference framework (Parr & Friston, 2017) and in normalization models (Schmitz & Duncan, 2018). The influence of this neuromodulator on bistable perception has been studied by Pfeffer et al. (2018) and Sheynin et al. (2020), with apparently inconsistent results of either no influence on the switching rate or a decrease in it, respectively. Based on our analysis, we found that the effect of the sensory precision ζ on the switching frequency depends on the other precisions (Figure 4), so looking at the switching rate alone seems insufficient to dissect the specific contribution of this neuromodulator. For this reason, we fitted our simulated data to polynomial expansions to disentangle the contributions of the individual precision terms. From this analysis, we see that an increase in sensory precision should accentuate the acuity of the perceived orientation (assumed to be equivalent to the post-switch perceptual confidence), which is consistent with Sheynin et al. (2020).

The ω precision has previously been associated with noradrenergic release (Parr & Friston, 2017). A study by Pfeffer et al. (2018) used a noradrenaline reuptake inhibitor to study this. They found that after administering a drug boosting noradrenaline, participants reported a faster switching rate of a bistable stimulus. As stated above, it is difficult to dissect a specific contribution of neuromodulators (considering them as precision modulators) in bistable perception tasks given only the measure of switching rate. Moreover, bistable perception shows a close link to pupil dilation (Einhauser et al., 2008; Hupé et al., 2009; Kloosterman et al., 2015), which is in turn linked to noradrenergic release (Larsen & Waters, 2018), so future work could target pupil dilation in addition to the eye movements in our current model.

We also showed that high policy precision γ decreases the frequency with which bistable perception alternates. This precision parameter is suggested to be related to dopamine (Parr & Friston, 2017), but few studies have looked at the role of dopamine in bistable perception or binocular rivalry. Nevertheless, a study by Schmack et al. (2013) showed an observable alteration of perceptual switching associated with DRD4 gene carriers, but this effect was found only for a specific allele (DRD4-2R) and not for others (DRD4-4R and DRD4-7R). Moreover, Kondo et al. (2012) found no effect of dopaminergic genes on the rate of visual perceptual switches. However, for auditory bistable perception, the presence of prominent alleles for synthesizing this neurotransmitter decreased the number of switches. It is also not fully understood how these specific genes affect the dopaminergic neurocircuitry, so a specific conclusion on whether and how dopamine targets bistable perception remains open.
Neuroanatomy

The deployment of the precision terms studied here can be associated with feature-based attention (FBA), as the perceptual switches here are understood as switches of orientation. This view is also corroborated by the similarity of the brain regions involved in processing bistable perception and FBA, as both activate regions such as the frontal eye field, intraparietal cortex, temporoparietal junction, and inferior frontal junction (Brascamp et al., 2018; Loued-Khenissi & Preuschoff, 2020; Zhang et al., 2018). Interestingly, all the neuromodulators suggested to be related to the distinct precision terms used here are involved in attentional processing (Thiele & Bellgrove, 2018). It is possible that the FBA network deploys attentional mechanisms partially by regulating distinct neuromodulators, which lead to distinct neurobiological changes but overlapping behaviours. This relates to the previously reported top-down modulation of bistable perception via the fronto-parietal network (De Graaf et al., 2011).

Limitations and future directions

A key limitation of our work is that we included a limited number of possible fixation points, which makes the study of eye movements over-simplified. Including more fixation points, and thus actions, could provide a more applicable model for empirical studies and the fitting of real data. Next, we pre-specified the initial probabilities and precision values instead of updating them during each trial. Future work should explore online selection of these precisions and how they influence bistable perception based on a task. This is related to mental actions, i.e., internal processes that change posterior beliefs by regulating precision (Limanowski & Friston, 2018; Metzinger, 2017), and could be added via a hierarchical model in which slower parts of the model modulate the precision terms that influence faster dynamics (Hesp et al., 2021). By understanding how the precision parameters are learned, we could also examine the dynamics of neuromodulators, as so far we have studied these effects in a stationary environment.

Conclusion

We have shown how bistable switches can be manipulated via three distinct precision terms. Moreover, we disentangled their influences, using changes in posterior beliefs to identify perceptual switches. The remaining question concerns the plausible implementation of these precision terms in the brain: cholinergic, noradrenergic and dopaminergic neurocircuitries are currently suggested to relate to the likelihood, state transition, and policy selection precision terms, respectively. Overall, our results speak to a degenerate functional architecture that supports the switching rate of bistable perception (Noppeney et al., 2004; Price & Friston, 2002; Sajid, Parr, Hope, et al., 2020), i.e., multiple neuromodulatory systems can modulate the perceptual switching rate.
Figure 1. A graphical representation of the Necker Cube model. The figure provides a graphical illustration of the generative model with two hidden state factors and two outcome modalities. The first hidden state factor, the fixation point, has three levels: bottom-left, top-right, and initial fixation. The second hidden state factor, orientation, has two levels: right and left orientation. The outcome modality features has three levels: 'Corner 1' (C1), 'Corner 2' (C2), and Null. Here, C1 and C2 denote the two opposite corners and their surrounding areas, and the Null outcome is only plausible under the initial fixation point at the first time step. There is an identity mapping from the fixation point hidden state factor to where outcomes. The likelihood function of the generative model, i.e., the probability of an outcome given a hidden state, is encoded such that i) the bottom-left fixation point is more informative about the right orientation, as the agent perceives the related C1 corner, ii) the top-right fixation point is more informative about the left orientation, as the agent perceives the related C2 corner there, and iii) the initial fixation maps onto a null outcome (i.e., neither C1 nor C2). The fixation point transitions (i.e., representing the state transitions across time) are completely precise. This encodes the eye movements between different fixation locations. Conversely, orientation transitions for the generative process are non-controllable and transition to the same orientation over time. Here, ε = e⁻⁸ is a small number that prevents numerical overflow.

Figure 2. A graphical illustration of how different precision values change the likelihood and priors of the generative model. A) A modulation of the likelihood matrix via the sensory precision (ζ). Each row is for a different fixation point, with bottom-left on the first and top-right on the second, where the x-axis represents the orientation states and the y-axis the feature outcomes. B) This panel shows how perturbations of the state transition precision (ω) influence the categorical probability distribution of the orientation transition. The x-axis represents the orientation states at the current timepoint (t) and the y-axis the orientation states at the next time point (t + 1). Here, low ω values lead to a flat distribution, which limits the capacity to project current beliefs about orientation states to past and future epochs, whereas with high ω the state transition matrix becomes more precise and the capacity to pass messages between epochs increases. For all plots, the scale goes from white (low probability) to black (high probability), and grey indicates gradations in-between. The key difference to note is how the probability distribution shifts from imprecise to precise mappings as we move from low to high precision values (e.g., > 0.5). Values above 0.5 look visually the same as the value of 0.5 and are therefore excluded from this representation.

Figure 5. Dissociating individual precision terms. For A and B, each data point represents the average posterior switch probability (A; y-axis) and the number of switches (B; y-axis) across different precision values (x-axis). The curves represent the fitted polynomials for each precision value: ζ (blue), ω (green) and γ (cyan). C) The joint plot of the association between the number of switches and the posterior switch probability. The x-axis presents the posterior switch probability, the y-axis the number of switches. Here, each plot presents a different precision term.
Table 1. Overview of the precision parameters and how they may affect bistable perception.

Table 3. Fitted polynomial coefficients across different precision values for the posterior switch probability (A) and the average switch number (B). The relationship between precision and perceptual switching was modelled with a polynomial expansion of the form y = β₀ + β₁x + β₂x² + …
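The fits reported in Table 3 can be reproduced in outline with an ordinary least-squares polynomial fit. The numbers below are illustrative placeholders, not the study's data; the polynomial degrees follow the text (non-linear for ζ and ω, 1st-order for γ).

```python
import numpy as np

def fit_precision_curve(precision_values, outcome_means, degree):
    """Fit a polynomial relating a precision term to its (marginal)
    average outcome, as in Table 3."""
    coeffs = np.polyfit(precision_values, outcome_means, deg=degree)
    return np.poly1d(coeffs)

# Hypothetical marginal means for gamma vs. switch count (illustration only):
gamma = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
mean_switches = np.array([9.1, 7.4, 5.2, 3.0, 1.2])
line = fit_precision_curve(gamma, mean_switches, degree=1)
print(line)  # negative slope: higher gamma, fewer switches
```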
A Functional Variant of NEDD4L Is Associated with Obesity and Related Phenotypes in a Han Population of Southern China

NEDD4L is a candidate gene for hypertension, both functionally and genetically. Recently, studies showed evidence for the association of NEDD4L with obesity, a key intermediate phenotype in hypertension. To further investigate the relationship between NEDD4L and body mass-related phenotypes, we genotyped three common variants (rs2288774, rs3865418 and rs4149601) in a population-based study of 892 unrelated Han Cantonese using the Sequenom MALDI-TOF-MS platform. Allele frequencies and genotype distributions were calculated in lean controls and overweight/obese cases and analyzed for association by the Chi-squared test and logistic regression. Linear regression analysis was used to analyze the effect of individual genotypes on quantitative traits. Multivariate analyses demonstrated that the minor allele of rs4149601 (A = 20.9%) was associated with a 2.60 kg, 2.78 cm and 0.97 kg/m² decrease per allele copy in weight, waist circumference and BMI, respectively. Carriers of this allele also had a significantly lower risk of overweight/obesity (p < 0.0001, OR = 0.52, 95% CI: 0.37-0.74) as compared to non-carriers. However, no significant association between genotypes at rs2288774 and rs3865418 and covariate-adjusted overweight/obesity or any related phenotypes was observed. These results suggest that the functional variant of NEDD4L, rs4149601, may be associated with obesity and related phenotypes, and further genetic and functional studies are required to understand its role in the manifestation of obesity.

Introduction

Numerous linkage studies have indicated NEDD4L as a candidate gene for essential hypertension among different populations [1-4]. Although the precise pathways and biological mechanisms underlying these associations have yet to be established, according to recent research [5,6], two potential separate mechanisms are worthy of consideration: first, acting directly via the epithelial Na+ channel (ENaC)-NEDD4L-proteasome system [7], NEDD4L is the key link of this system [8], which plays an important role in the regulation of blood pressure (BP) [9]; second, acting indirectly via key intermediate phenotypes of BP, such as obesity [10] and salt sensitivity [6], both of which are major risk factors for the development of hypertension [11,12]. Given the complex etiology and pathophysiology of hypertension, dissecting the intermediate phenotypes of BP and unraveling the intermediate mechanisms has been suggested to be a more tractable approach [13,14].

Recently, a common polymorphism located in exon 1 (rs4149601) of the NEDD4L gene was shown to be associated with obesity in Kazakhs [10], whereas another common polymorphism located in intron 12 (rs3865418) showed no such association in the same group [15]. No other studies or replications are available on the issue of whether NEDD4L genetic variation is a contributing factor to the risk of obesity. These common variants may also be involved in obesity-related phenotypes.
In particular, BMI is a highly heritable phenotype, but robust associations of genetic polymorphisms with BMI or other obesity-related phenotypes have been difficult to establish. The effect of NEDD4L genetic variation on obesity-related phenotypes thus remains unclear, and there has been no association analysis of common variants in NEDD4L with BMI in non-Asian populations. Our study, therefore, focused on the association of genetic variation in NEDD4L with obesity and 11 related phenotypes in a genetically isolated Han Chinese population, which we have previously used successfully to identify single-nucleotide polymorphisms (SNPs) associated with obesity-related phenotypes. In this study, three common variants (rs4149601, rs3865418 and rs2288774) were selected from dbSNP at NCBI, based on their position in the gene, minor allele frequency and previous studies.

Results and Discussion

A test for Hardy-Weinberg Equilibrium (HWE) suggested that the genotypes for all the SNPs were in Hardy-Weinberg proportions, with no deviation from HWE among normal-weight subjects (p = 0.79, 0.71 and 0.49 for rs2288774, rs3865418 and rs4149601, respectively). The frequencies of the minor alleles in the whole study group were 20.9% for rs4149601, 35.3% for rs3865418 and 38.9% for rs2288774. This is in good agreement with the Han Chinese in Beijing (20.0%, 35.6% and 32.2%, respectively). However, Blacks and Caucasians have quite different frequencies (Table 1). Linkage disequilibrium (LD) analysis using Haploview revealed that the three SNPs spanned three different LD blocks (D' = 0.09-0.72, r² = 0.001-0.44). Because the SNPs cannot substitute for one another, the findings for all three SNPs are described in the Results.

Association Analysis of Genetic Variants in NEDD4L with Overweight/Obesity

Allele frequencies at rs4149601 showed a significant difference between healthy lean controls and overweight/obese cases, with a p-value of 0.0002. The difference remained statistically significant even after correction for multiple testing by Bonferroni correction (×3, p-value 0.0006) (Table 2). Correspondingly, rs4149601 was significantly associated with overweight/obesity (p < 0.0001, OR = 0.51, 95% CI: 0.37-0.71). Significant values were also obtained in the additive model analysis after adjustment for age, sex, smoking, hypertensive status, alcohol consumption and exercise habit (p < 0.0001, OR = 0.52, 95% CI: 0.37-0.74). Neither of the other two selected SNPs showed any significant association with overweight/obesity. The results of the association analysis are summarized in Table 3.

Analysis of Obesity-Related Phenotypes and Genotypes at Genetic Variants in NEDD4L

Tables 4, S1 and S2 present the relationship of the three selected SNPs with 11 obesity-related phenotypes. One of the three SNPs, rs4149601, had a significant association with weight, waist circumference and BMI (p = 0.0003, 0.0015 and 0.0012, respectively), with an allele-dose effect. Carriers of the minor allele had, on average, a 2.60 kg lower weight, a 2.78 cm smaller waist circumference and a 0.97 kg/m² lower BMI per allele copy. For the other explored obesity-related quantitative traits, such as BP, log serum triglycerides (TG), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-c), high-density lipoprotein cholesterol (HDL-c) and glucose levels, and the anthropometric variable of height, we observed no evidence for an association with the rs4149601 genotype (Table 4).
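The HWE check described above is a one-degree-of-freedom Pearson goodness-of-fit test against the expected genotype proportions p², 2p(1−p) and (1−p)². A minimal sketch follows, with hypothetical genotype counts (not the study data):

```python
from scipy.stats import chi2

def hwe_chisq(n_AA, n_Aa, n_aa):
    """Pearson goodness-of-fit test for Hardy-Weinberg proportions
    from observed genotype counts (1 degree of freedom)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)  # allele frequency of A
    expected = [n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e)**2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

# Hypothetical counts for illustration only:
stat, pval = hwe_chisq(400, 380, 112)
print(f"chi2 = {stat:.3f}, p = {pval:.3f}")  # p > 0.05: no deviation from HWE
```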
There were no associations between any of the obesity-related phenotypes and the rs2288774 polymorphism (Table S1) or the rs3865418 polymorphism (Table S2).

NEDD4L is a member of the homologous to the E6AP carboxyl terminus (HECT) class of E3 ubiquitin ligases, which plays an important role in the development of lipid disorders [16], central obesity [17] and hypertension [18]. NEDD4L is thus an attractive candidate gene for susceptibility to hypertension and intermediate blood pressure phenotypes (e.g., obesity [10] and salt sensitivity [6]). The present study is the first population-based study reporting an association between NEDD4L genetic variation and obesity or related phenotypes in an independent Han Chinese population. In this study, we first used a case-control approach for analyzing three common SNPs in NEDD4L for association with overweight/obesity. Furthermore, we also investigated 11 obesity-related quantitative traits for their association with the selected SNPs. We provide evidence that genetic variation of NEDD4L may be implicated in the prevalence of overweight/obesity and related phenotypes in Chinese Hans.

Recently published genetic studies in the Kazakh general population corroborate our findings: no significant associations were found between the rs2288774 and rs3865418 polymorphisms and any of the obesity-related phenotypes as quantitative traits [1], nor was there any association between the rs3865418 genotype and obesity when considered as an affection status [15]. However, exploratory evidence for an association of rs4149601 with a decreased risk of being overweight/obese was detected in both the Kazakh and Han Chinese populations. As revealed by the odds ratio, carriers of the minor allele (A) of rs4149601 tend to have a decreased susceptibility to being overweight or obese ([OR = 0.52, 95% CI: 0.37-0.74] in our subjects and [OR = 0.67, 95% CI: 0.50-0.91] in the Kazakh population [10]) compared with subjects who do not carry this allele.

Because no functional studies are available to explain the underlying biological mechanisms of the observed genetic associations, we cannot conclude from our data whether the decreased risk of overweight/obesity that we observed in carriers of the minor allele of rs4149601 is associated with increased or decreased NEDD4L activity. However, based on the known functions of the NEDD4L gene, one can speculate that the effects of E3-ubiquitin ligase activity on lean mass, as well as on fat mass, may partly explain the relation with obesity that we observed [19]. In line with these findings, we also observed that the minor allele (A) of rs4149601 is associated with decreased weight, waist circumference and BMI. Further studies are needed to investigate the relation between NEDD4L genetic variants and human body composition traits and muscle strength. Yet, the strong association with waist circumference that we found makes a predominant effect on lean mass unlikely, since obesity is associated more strongly with excess adipose tissue than with excess muscle mass [20].

The G→A variant of rs4149601 is a common variant of NEDD4L that encodes a new protein (novel C2-domain) lacking a full-length C2-domain [21,22]. It has been speculated that NEDD4L lacking the functionally crucial C2-domain downregulates ENaC more potently than the protein variants with the intact C2-domain [23]. Consequently, many researchers estimated that the A allele would decrease BP by downregulating Na+ re-absorption [6,24].
However, unlike NEDD4, one functional study showed that NEDD4L, both with and without the C2-domain, can strongly reduce ENaC activity [25]. Furthermore, in this case, ubiquitination of the epithelial sodium channel could be reduced due to the novel C2-domain, and the activity of the epithelial sodium channel, Na+ transport and blood pressure would be expected to increase. Correspondingly, controversial results have been observed in the relationship between SNP rs4149601 and essential hypertension [3,24]. Notably, recent functional studies greatly extended the knowledge surrounding NEDD4L. First, in addition to targeting sodium channels, NEDD4L has also been shown to negatively regulate TGF-β signaling by targeting Smad2 and Smad3 for degradation [26]. Second, dysregulation of TGF-β signaling has been implicated in diseases such as type 2 diabetes, obesity and cancer [27]. Our results may thus build on a small but growing body of literature suggesting that the functional variant rs4149601 in NEDD4L may also participate in the regulation of metabolism and be associated with the development of obesity and related phenotypes.

Obesity has become a worldwide epidemic and represents a major risk factor for type 2 diabetes, hypertension, cardiovascular disease and stroke. Few, if any, effective options for treatment and prevention are available. The findings of our study are in line with the initial hypothesis that modulators of NEDD4L activity may be associated with obesity and related phenotypes and may thus provide a valuable new strategy for the treatment and prevention of obesity and its related diseases. Another attractive aspect of our findings is that SNP rs4149601 is a functional variant within the NEDD4L gene. However, we have little information on the functional effects of the genetic variants of NEDD4L on obesity and related phenotypes; we cannot conclude from our data whether the lower weight, waist circumference and BMI that we observed in carriers of the minor allele of rs4149601 are associated with increased or decreased NEDD4L protein activity. Thus, further studies into the underlying mechanisms and body composition are needed, as well as prospective studies on the relation between NEDD4L variants and obesity or related phenotypes.

Beyond these findings, our study also has certain limitations. First of all, the analyses were performed in a moderately sized sample, which is underpowered to detect moderate or small effects, underlining the necessity of conducting larger studies. In the present study, for the sample of 256 cases and 636 controls, the power estimates were larger than 90% to detect a log-additive genotype relative risk of 0.65. For the quantitative analyses in the sample of 892 subjects, the power estimate was larger than 85% to detect an additive effect of 0.24 in units of SD of a standard normal distribution (standardized effect size). Thus, all samples were well powered to detect strong effect sizes of disease-predisposing variants; moderate or smaller effects might have been missed, and additional analyses with larger samples are necessary. Indeed, while this manuscript was in preparation, similar studies were being carried out in another Han Chinese population as part of the Genetic Research in Isolated Han Chinese Populations Program in Southern China. We expect that the findings from the present study will be replicated.
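As a rough illustration of what such a power estimate involves (QUANTO's exact calculation is not reproduced here), one can simulate case-control genotype counts under a log-additive genotype relative risk and record how often a chi-square test reaches significance. Tilting the case genotype frequencies by RR^g, as below, is a crude approximation that treats the control genotype distribution as the population distribution:

```python
import numpy as np
from scipy.stats import chi2_contingency

def simulate_power(n_cases, n_controls, maf, genotype_rr,
                   n_sims=2000, alpha=0.05, seed=1):
    """Crude simulation-based power estimate for a log-additive
    genotype relative risk (a stand-in for the QUANTO calculation)."""
    rng = np.random.default_rng(seed)
    geno_p = np.array([(1 - maf)**2, 2 * maf * (1 - maf), maf**2])
    hits = 0
    for _ in range(n_sims):
        controls = rng.multinomial(n_controls, geno_p)
        # Tilt genotype frequencies by RR^g (g = 0, 1, 2 minor alleles).
        w = geno_p * genotype_rr ** np.arange(3)
        cases = rng.multinomial(n_cases, w / w.sum())
        _, p, _, _ = chi2_contingency(np.array([controls, cases]))
        hits += p < alpha
    return hits / n_sims

# Same sample sizes, MAF and RR as described above:
print(simulate_power(256, 636, maf=0.2, genotype_rr=0.65))
```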
Second, note that both our control and case groups are older than 50 years, which might reduce the chances of misclassification errors as opposed to the use of children and adolescents, because younger men might become overweight or obese later in life. It also stands to reason that older individuals have had longer exposure to putative environmental and genetic influences. Nevertheless, further studies are needed to replicate our findings among young subjects. Third, plasma leptin [28] and insulin [29] levels are important obesity-related phenotypes, which may contribute to the understanding of the physiopathology of the associations between genetic variants and obesity. These data will be collected in the next phase of our study. Finally, in this study, we focused on the relationship between three common variants and obesity or related phenotypes. It will be necessary to clarify whether other variants of NEDD4L have any effects on obesity and related phenotypes in the future.

Ethics Statement

The studies were approved by the Ethical Committees of both the Canton Center for Disease Control and Prevention (Canton CDC) and South Medical University and adhered to the principles of the Declaration of Helsinki. Written informed consent was obtained from each individual enrolled before entry into the study, and all of the procedures were in accordance with institutional guidelines.

Subjects

All of the DNA samples and clinical data for participants in this study were collected from the Genetic Research in Isolated Han Chinese Populations Program in Southern China. The aim of this program was to identify genetic risk factors in the development of chronic disabling disease. For this program, the participants were restricted to local permanent residents of Han Chinese descent with no mixed marriages within the past three generations, ensuring a completely homogeneous ethnic background. In the present study, we focused on 892 participants, randomly recruited from screening during the ten-month period from March to December 2011 in seven districts of Canton, for whom complete phenotypic, genotypic and genealogical information was available.

Measurements

Weight and height were measured by the same two investigators. Height was measured to the nearest 0.5 cm on a standardized height board. Weight was determined in light underwear and no shoes to the nearest 0.1 kg on a standard physician's beam scale. BMI was computed as weight in kilograms divided by height in meters squared (kg/m²). Overweight and obesity were defined according to the WHO criteria: normal weight, 18.5 kg/m² ≤ BMI < 25 kg/m²; overweight, 25 kg/m² ≤ BMI < 30 kg/m²; and obesity, BMI ≥ 30 kg/m². Waist circumference was measured to the nearest centimeter with a flexible steel tape measure while the subjects were in a standing position at the end of gentle expiration. All of the participants were asked to avoid alcohol, smoking, coffee, tea and exercise for 30 min before the BP measurements. A standardized mercury sphygmomanometer and an appropriate cuff size (regular adult, large or thigh) were used by trained nurses to measure the BP in the subjects' right arm. The sitting BP was measured three times, 30 s apart, after 5 min of rest, and the average was defined as the BP level. An average blood pressure of the last two measurements ≥ 140/90 mmHg or a self-reported diagnosis of hypertension was defined as hypertensive.
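The BMI computation and the WHO cut-offs used for classification reduce to a few lines:

```python
def bmi(weight_kg, height_m):
    # BMI = weight (kg) / height (m) squared.
    return weight_kg / height_m**2

def who_category(b):
    # WHO cut-offs, as applied in this study.
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal weight"
    if b < 30:
        return "overweight"
    return "obesity"

print(who_category(bmi(82.0, 1.70)))  # BMI = 28.4 -> overweight
```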
Blood specimens were drawn after overnight fasting and analyzed for triglycerides (TG), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-c), high-density lipoprotein cholesterol (HDL-c) and glucose. The intra- and inter-assay coefficients of variation for these variables were all less than 5%.

SNP Selection and Genotyping

Candidate SNPs were included if they were located within the NEDD4L gene and had previously been reported to be associated with essential hypertension and its related phenotypes; SNP data for the Han Chinese population included in HapMap were also used as a reference. SNPs that failed in the assay design were excluded. A total of 3 frequency-validated SNPs were selected for genotyping. The experimental procedures mentioned in this article have been described previously [30-32]. Briefly, the three selected common variants (rs4149601, rs3865418 and rs2288774) were genotyped using the Sequenom MassARRAY matrix-assisted laser desorption/ionization time-of-flight mass spectrometry platform (Sequenom, San Diego, CA, USA) on genomic DNA isolated from peripheral leukocytes. The first step was to amplify the genomic sequence containing the SNP by a standard PCR protocol, producing amplicons 80-120 bp in length. Subsequently, the single-base extension (SBE) reaction was performed on the genomic amplification product using the iPlex enzyme and mass-modified terminators. The products of the iPlex reaction were desalted and transferred onto a SpectroCHIP by the MassARRAY nanodispenser. The SpectroCHIP was then analyzed by the MassARRAY Analyzer Compact. All primers used are available on request. To check the validity of the genotypes, 78 individuals were genotyped in duplicate; concordance was 100%.

Statistical Analyses

Power calculations were performed using the software QUANTO Version 1.2.4 [33] (University of Southern California, Los Angeles, CA, USA; http://hydra.usc.edu/gxe) for all common variants, using an estimated minor allele frequency (MAF) of 0.2 and α = 0.05 (two-sided). A Hardy-Weinberg Equilibrium test was performed for each SNP by Pearson's goodness-of-fit Chi-square test before further analysis, and a p-value < 0.05 was considered to show significant deviation of the observed genotypes from the Hardy-Weinberg proportions. Linkage disequilibrium (LD) between SNPs was assessed by D' and r² values using Haploview version 4.2 [34]. A value of 0 for D' indicates that the examined loci are independent of one another, while a value of 1 demonstrates complete dependency [35]. The odds ratio of being overweight or obese compared with normal weight was assessed using logistic regression. The obesity-related phenotypes of interest were examined for normality, and the blood parameters (TG, TC, HDL-c, LDL-c and glucose) were log-transformed to achieve an approximately normal distribution. Analyses of all quantitative variables (waist circumference, weight, height, BMI, blood pressure and the log-transformed blood parameters) were performed using linear regression with SNPStats [36]. The covariates age, sex, smoking (graded current/former/never), hypertensive status, presence of cardiovascular medications, alcohol consumption (in units per week) and exercise habit (graded non-regular/one or two times per week/three or more times per week) were considered. Unless otherwise stated, all analyses were performed using an additive genetic model, and all reported p-values are nominal, two-sided and not adjusted for multiple testing.
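The following is a sketch of the covariate-adjusted additive-model analysis, in the spirit of the SNPStats procedure described above. The column names (genotype coded as the minor-allele count, case status, and the covariates) are assumptions about how such a dataset might be laid out, not the study's actual variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def additive_logit(df: pd.DataFrame):
    """Per-allele odds ratio for overweight/obesity under an additive model.
    df: one row per subject, with 'genotype' coded 0/1/2 (minor-allele count),
    'case' coded 1 for overweight/obese, plus the adjustment covariates."""
    covars = ["age", "sex", "smoking", "hypertension", "alcohol", "exercise"]
    X = sm.add_constant(df[["genotype"] + covars].astype(float))
    fit = sm.Logit(df["case"], X).fit(disp=False)
    or_per_allele = np.exp(fit.params["genotype"])
    ci_low, ci_high = np.exp(fit.conf_int().loc["genotype"])
    return or_per_allele, (ci_low, ci_high), fit.pvalues["genotype"]
```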
Correction for multiple testing was done by Bonferroni's inequality method wherever applicable and was defined as the p-value (from single tests) × the number of tests. Bonferroni correction was not done for multiple traits, due to the lack of any significant association. Statistical significance was established at a two-tailed value of p < 0.05.

Conclusions

In conclusion, we analyzed three common variants in NEDD4L, namely rs2288774, rs3865418 and rs4149601, for association with overweight/obesity and 11 related phenotypes. We found that carriers of the minor allele (A) of rs4149601 have lower weight, waist circumference and BMI, and a decreased risk of being overweight/obese. Together with the results from recent studies, these consistent findings warrant research into a potential role for NEDD4L modulators in the prevention and treatment of human obesity.
RENEWABLE ENERGY SOURCES AND THEIR IMPACT ON POLISH LABOR MARKET IN THE CONTEXT OF GLOBAL ENERGY PROBLEMS

The aim of this article is to analyze potential directions of Renewable Energy Sector (RES) development and its impact on the labor market, assessing the job-creating potential of the renewable energy sector in Poland. The paper focuses primarily on electricity generation technologies such as wind power stations and solid biomass. The comparison between the employment level in the RES sector and other sectors shows the scale of the impact of renewable energy development on the labor market in Poland and on the country's economy overall. Our findings show that, currently, the total number of jobs created because of the development of wind energy (11 500), solid biomass (18 800) and solar energy (2 750) exceeds employment in the coke industry (4 000), cement (6 000) and lignite mines (5 000). The results of the author's research indicate that the construction of, for example, wind power plants is not only an opportunity for local communities to create additional employment, but also an opportunity to enrich the community through various types of taxes.

Introduction

Energy is a key area of industry in most countries of the world in economic, social and political terms; the fuel and energy complex is therefore under the special supervision of the state and quite strictly regulated. National security as a whole depends on this complex and its economic constituent elements. The increasing degree of internationalization and globalization of the energy sector and the growing energy interdependence between individual countries confirm the thesis that countries cannot ensure their energy security without solving the problems of international energy security at the regional and global levels, with particular emphasis on renewable energy sources.

Directive 2009/28/EC of the European Parliament and of the Council of 23 April 2009 on the promotion of the use of energy from renewable sources, amending and subsequently repealing Directives 2001/77/EC and 2003/30/EC, imposed on Poland the obligation to increase the share of renewable energy in final gross energy consumption by the end of 2020. The Directive sets new conditions for the development of renewable energy production and provides a common framework for the promotion of renewable energy sources. At the same time, it establishes mandatory national targets in order to achieve a 20% share of renewable energy in gross end-use of energy throughout the EU in 2020. The goal for Poland is to achieve a 15% renewable energy (RES) share of total final energy consumption by 2020. All of these actions have a huge impact on both the Polish and the European labor market in general.

The purpose of this article is to analyze the Polish labor market in the context of the development of renewable energy sources, analyzing the current state and future possibilities, as well as the impact of RES on regional development based on the conducted study, with particular emphasis on data from the Kisielice community, the only energy-independent community in Poland.
Data and Methods

The analysis in this article is based on a study of the literature, the legal regulations and Eurostat statistics, as well as on the author's own paper-and-pencil interviews, conducted in 2015 among a) RES investors, b) municipalities in which wind power stations are located in Poland, and c) municipalities with outstanding wind conditions but no wind farms established yet, in order to collect detailed responses and a set of qualitative data.

Results of the research

National security depends on ensuring energy security in terms of diversifying not only the sources of supply (using a wide range of energy sources) but also suppliers, routes and transport mechanisms. A country's energy system based on a few large coal power plants is more susceptible to sabotage than a system based on a dozen scattered low- and medium-power sources. The problems of Polish and EU energy dependence and employment reduction in industries such as mining can be partly solved by developing a strong renewable energy sector as well as by building credible partnerships with suppliers, transit countries and buyers. International solutions are also needed to reduce global greenhouse gas emissions.

The diversification of electricity sources in the EU countries by energy source is shown in Figure 1. According to the data presented in Figure 1, Poland occupies third place in terms of using traditional energy sources, such as coal, for electricity production. Poland's power industry has always been based on coal, which is why the largest power units were created near the coal and lignite mines. Considering the actual conditions and effects of renewable energy sector development, it is important to take into account, in accordance with the constitutional guiding principles of environmental protection and the principle of sustainable development, the economic and social factors that determine the development of a given energy sector. At the same time, we cannot forget about the conditions resulting from the need to protect the environment, including natural and landscape values. Poland's energy industry is faced with the need to modernize and strengthen the National Electricity Grid. Worn-out coal-fired power stations need to be replaced with new production capacity. Some of them will still be based on coal, which will continue to be the main source of energy in the next few decades, according to "Poland's Energy Policy until 2030" (Ministerstwo Gospodarki, 2009).
Development of the labor market

The effects of the renewable energy industry on the labor market can be observed on the scale of the whole country and of the European Union in general. In the European Union, the renewable energy sector in 2015 provided employment for 1 139 050 people, including 43 300 people in Poland, where many more people are employed in the RES sector per unit of energy produced than the EU average (Table 1). The reason for this might be lower technological sophistication relative to the leading European countries, for example Germany, regarding new photovoltaic or wind energy technologies (Graczyk, 2014), or Norway, whose electricity generation is 97% renewable and whose government plans to increase sustainable energy use even further (Invest in Norway, 2017). It can therefore be assumed that the dissemination of renewable energy technologies will result not only in increased employment in absolute terms but also in decreasing employment per unit of production, which means increased productivity and, consequently, a decrease in unit costs. The scale of this phenomenon depends on the current advancement in the application of the technology. The data presented in Table 2 indicate that, for example, in the case of solid biomass technology and wind energy, in which Poland has a considerable scale of production and experience, employment per unit of production is already lower than the European average. This means that the Polish experience makes it possible to produce more electricity and heat with lower labor input, which makes Poland's RES competitive and attractive for foreign and domestic investments. Furthermore, the calculations from Table 2 and the predicted data on electricity production in Poland until 2030, presented in Annex 2 of "Poland's Energy Policy until 2030" (Ministry of Finance, 2009), clearly show that in 2015 Poland had already outperformed the forecasts.

The worldwide renewable energy sector in 2016 employed 9.8 million people, directly and indirectly (a 1.1% increase in 2016 over 2015). The most consistent increase has come from jobs in the solar PV and wind categories, which have more than doubled since 2012. In contrast, employment in solar heating and cooling and in large hydropower has declined. These employment trends can be attributed to several underlying factors. Falling costs and supportive policies in several countries, for instance, have spurred the deployment of renewables at a record pace and have resulted in job creation. However, these positive changes were moderated by lower investments, rising automation and policy changes, resulting in job losses in some major markets, including Brazil, Japan, Germany and France (International Renewable Energy Agency, 2017).

The shape of the EU climate and energy policy clearly indicates the need to further increase the share of RES in the national energy mix. However, the dynamics of change, the specific value of the national RES target for 2030 and the contribution of wind energy to its fulfilment still depend on future political decisions. Therefore, the potential impact of wind energy on the Polish labor market until 2030, based for example on the study "Impact of wind energy on the Polish labor market" (Bukowski, Śniegocki, 2015), was determined on the basis of a scenario analysis, in which three development scenarios for the sector in Poland (central, low and high) were analyzed. It was assumed that, during 2018-2030, investments in onshore wind farms would be as follows: 400 MW/year in the central
scenario, 200 MW/year in the low scenario, and 600 MW/year in the high scenario. It should be stressed that re-accelerating the development of the wind energy sector is a prerequisite for Poland's fulfilment of the binding RES development target for 2020. The realization of the low scenario would therefore mean not only Poland losing its development impetus for wind energy, but also a high risk of incurring the costs of failing to comply with the provisions of the EU climate change package.

The Polish RES sector reforms introduced by the Renewable Energy Sources Act 2015 (the 'RES Act'), which came into force on 1 July 2016, marked a significant step forward; however, subsequent amendments to the RES Act have illustrated that the Polish government is in the difficult position of striking a balance between developing RES for energy diversification and rescuing its coal industry. It is estimated (Pacula, 2017) that around 80% of Polish coal mines (mainly concentrated in the south-western region of Silesia) are unprofitable; the sector employs around 104 000 people, with another 208 000 people on miners' pensions. Poland has Europe's largest hard coal reserves, and thermal coal and lignite accounted for 84% of the country's electricity generation in 2015 (Easton, 2016). Despite governmental subsidies, the debts of Poland's coal mining industry are still huge (Wood & Broom, 2017).

It is to be expected that, despite the increased productivity of the industry, wind energy in Poland will generate more jobs per unit of energy than the coal energy sector in subsequent decades, especially after employment restructuring in hard coal mining. According to the author's survey, wind energy installations are usually locally oriented, in which case there is no need to build centralized technical infrastructure. Taking into account that RES creates jobs that are geographically more dispersed than those of conventional power stations, because it depends on the location of the resources (González & Vélez, 2009), and the fact that it has higher rates of employment per MW installed than conventional energy (Rodríguez-Huerta, Rosas-Casals, Alevgul and Sorman, 2017, p. 557), it can be concluded that wind energy can successfully become a stimulating factor for economic development at the regional level.

The use of wind energy at the local level brings both economic and social benefits. One of the most important economic benefits is the creation of a strong impulse for local development resulting from the growth of local entrepreneurship, and hence an increase in the number of jobs. Unfortunately, this fact is not always obvious in the various communes.

On the Polish labor market, the number of job offers related to wind energy is constantly growing. Specialists are being sought in the field of wind turbine construction; there is a need for designers, assemblers, operators, service and maintenance technicians and environmental managers, as well as experts in business development related to wind energy and investment advisors. Although the generated jobs are related to various activities during the investment cycle, the largest number of jobs are created during the construction and installation phases.
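For orientation, the cumulative onshore capacity implied by the three scenarios described earlier in this section is a simple multiplication (assuming, as a simplification, a constant annual rate over the 13 inclusive investment years of 2018-2030):

```python
# Cumulative onshore wind capacity added over 2018-2030 under the three
# scenarios (MW added per year, held constant for simplicity).
scenarios = {"low": 200, "central": 400, "high": 600}
years = 2030 - 2018 + 1  # 13 investment years, inclusive
for name, mw_per_year in scenarios.items():
    print(f"{name:>7}: {mw_per_year * years:,} MW added")
# low: 2,600 MW; central: 5,200 MW; high: 7,800 MW
```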
The construction of wind farms in a commune may also constitute an additional source of income. The results of the study indicate that the construction of wind farms is not only an opportunity for the local community to gain additional employment, but also an opportunity to enrich the commune in the form of various types of taxes. In addition, the use of renewable energy is strong support for communes and districts in their efforts to obtain external sources of financing, from various types of EU funds as well as from private investors, for the implementation of investments in the infrastructure they own.

Due to the significant depreciation of the existing infrastructure in public utilities, these investments will have to be carried out anyway. Therefore, the development of the wind energy sector can bring significant savings in planned investments and additionally boost local budgets. Inflows to municipalities where wind turbines have been located, in areas with favorable wind conditions, can account for up to 17-22% of the municipal budget. Furthermore, according to the author's research (Wasiuta, 2014), more than 96% of the analyzed communities consider tax revenues to the municipal budget and job creation potential to be the biggest benefits of RES development for the municipalities. That is why many municipalities are actively seeking to attract wind farms to their premises, and local governments are waiting for potential investors with open arms. The construction of a wind farm often brings not only the aforementioned budget revenues, but also an improvement, at the expense of investors, of the road network; 57% of the analyzed communes consider this factor to be important. This concerns not only the main roads and intersections, but also the construction of a network of roads in the fields between the wind turbines, which farmers willingly use later on (Wasiuta, 2014).

Comparing the employment level of the RES sector with other sectors shows the scale of the impact of renewable energy development on the labor market in Poland and on the country's economy overall. Currently, the total number of jobs (Table 2) created because of the development of wind energy (11 500), solid biomass (18 800) and solar energy (2 750) exceeds employment in the coke industry (4 000), cement (6 000) and lignite mines (5 000). In 2030, wind energy might create more jobs than coal mining, which, after the inevitable restructuring, will employ about 4 to 16 thousand people (according to the Warsaw Institute of Economic Studies (Bukowski & Śniegocki, 2015)). In contrast to the mining industry, the long-term perspectives arise from factors that are beyond national control (for example, the situation on the global coal market, the ban on unprofitable mines in the EU, and others). Moreover, as EU27 statistics show, coal reduced its share in the total primary energy supply from 22% in 1995 to 16% in 2009 (Markandya, Arto, González-Eguino, Romá, 2016, p.
1344). The development of the wind energy sector will depend largely on the shape of the regulations concerning the renewable energy auctions introduced in Poland. It is worth noting that jobs dependent on the wind energy sector are not concentrated in large industrial plants, and are therefore less visible than employment in traditional heavy industry and mining. It should also be taken into account that rising automation in extraction, overcapacity, industry consolidation, regional shifts, and the substitution of coal by natural gas in the power sector result in job losses in the fossil-fuel sector in some countries. Poland has two options in this sector: either to invest in the mining sector (for example in new technologies) to increase efficiency and reduce costs, in order to be competitive on local and international markets, which would lead to a reduction in the number of employees, or to continuously subsidize the mining industry in order to artificially sustain the sector and its employment (Wasiuta, 2014, p. 150). Moreover, climate policies and the rise of renewable energy usage may add pressure on the sector. In some power markets, the increased integration of variable renewable energy in the grid is already creating financial issues for incumbent fossil-fuel-based generators (IRENA, 2017).

For example, employment in the coal industry worldwide is decreasing due to several factors, such as power plant closures, overcapacity and improved mining technologies. China produces nearly half the world's coal, but excess supply and a slowing economy have led the government to plan the closure of 5 600 mines (Stanway, 2017) as well as to cancel plans to build more than 100 new coal-fired power plants (Forsythe, 2017), which can lead to the loss of 1.3 million coal mining jobs, equal to 20% of the total workforce in the Chinese coal sector (Yan, 2017). The Chinese government intends to spend more than $360 billion through 2020 on renewable power sources and to increase employment in this sector to 13 million people (Total Investment In Renewable Energy Will Reach 2.5 Trillion Yuan, 2017).

The solar energy sector in Poland is one of the few exceptions with rising statistics. According to the data presented in Table 2, the Polish solar industry employs 2 750 people and generates a turnover of 230 million euros.
In times of frequent protests organized by local community members against the construction of wind turbines, it is worth looking at places where wind farms coexist with the residents. The Kisielice community (Gmina Kisielice) in Poland is an interesting illustration of such a situation. The local authorities have found a way to pursue a modern, ecological direction of change while ensuring a continuous flow of financial resources, making it the first and only energy self-sufficient community in Poland. Wind energy has been implemented there consistently since the late 1990s. The local community is satisfied, and farmers are happy when their land is chosen for an investment, because they receive fair payment. In addition, the protection of the environment is a positive aspect for everyone using RES sources. Projects aimed at using biomass and cogeneration for heating in the community, implemented since 2003, have led to the closure of coal-fired boiler houses, while coal and oil heating systems in detached houses are successively being phased out. According to the author's research (Wasiuta, 2013), 80% of respondents consider it significant or moderately significant that the development of renewable energy will contribute to regional development in the form of self-employment and increased jobs in the region, which in turn contribute to the development of different economic sectors and of transport infrastructure.

Conclusion

The renewable energy sources sector creates diverse jobs in production, services and construction, requiring a variety of qualifications and skills. Its development not only increases the number but also improves the quality of jobs in the industry. The slowdown in the development of the second biggest Polish RES sector, wind energy, resulting from regulatory uncertainty while the law on renewable energy sources was being drafted, led to a reduction in the scale of the related employment by 3.5 thousand people (Bukowski & Śniegocki, 2015) in 2012-2014. Due to the unfavorable regulatory environment, this trend will probably continue over the next few years.

An increase in employment requires a new impetus of investments. In the next decade, the dynamics of jobs created, for example, by the wind energy sector will be determined primarily by the size of expenditures on the construction of offshore wind farms.

Dissemination of any renewable energy technology will result in an increase in employment in absolute terms, but a decrease in employment per unit of production. Employment in relation to installed capacity in Poland is higher than the average in the EU (Table 1). There is considerable potential for growth in revenues from renewable energy production (for example from income tax) and for increasing employment in this sector.

Figure 1. EU breakdown of electricity production by source in 2016.

Table 1. Energy production and employment in the renewable energy sources (RES) sector in Poland and the EU in 2016. Columns (for the European Union and Poland respectively): energy production (ktoe), employment (jobs), and number of employees per unit of production. Source: Author's own calculations based on EurObserv'ER, 2017b; EurObserv'ER, 2017c.

Table 2. Employment in the sectors of solid biomass (SB), wind energy (WE) and solar energy (SE) in terms of primary energy production in selected EU countries (2015). Source: Author's own work based on EurObserv'ER, 2017; EurObserv'ER, 2017a.
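Table 1's last column, the number of employees per unit of production, is a simple ratio of sector employment to primary energy production. A minimal sketch of how that metric is derived, in Python; the job counts are those quoted in the text, while the ktoe figures are placeholders for illustration only, since the table's data rows are not reproduced here:

```python
# Sketch of the Table 1 metric: employees per unit of energy production.
# Job counts are those quoted in the text (wind 11,500; solid biomass 18,800;
# solar 2,750); the ktoe values below are ILLUSTRATIVE placeholders, not the
# table's actual data.
jobs = {"wind energy": 11_500, "solid biomass": 18_800, "solar energy": 2_750}
production_ktoe = {"wind energy": 1_000.0, "solid biomass": 6_000.0, "solar energy": 50.0}

for tech in jobs:
    intensity = jobs[tech] / production_ktoe[tech]  # employees per ktoe
    print(f"{tech}: {intensity:.2f} jobs per ktoe")
```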
2019-05-20T13:06:59.261Z
2018-12-01T00:00:00.000
{ "year": 2018, "sha1": "527cfb63a4517c365da36eba86b2ddabd7214b7f", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.15414/isd2018.aeu.17", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "527cfb63a4517c365da36eba86b2ddabd7214b7f", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
270676484
pes2o/s2orc
v3-fos-license
HR management of enterprises under martial law, socio-cultural and technological challenges

Business activity should be adequately reflected in the processes of enterprise HR management. The problems are changing: business and HR strategies that were effective in peacetime and in the period of supply dominance in the labour market have lost their relevance and need to be adapted to the new realities of life. Therefore, there is a need to address the efficiency of enterprises by interpreting business goals in the language of analytics and HR strategies, as well as the adequacy of tools for budget planning.

For this reason, it is worth highlighting three main aspects of the issues outlined in this article. The first aspect relates to the peculiarities of staff work under martial law and awareness of the organizational challenges that enterprises and staff face in order to function safely and to have the prospect of decent work and development in an independent Ukraine in the post-war period. The second aspect relates to the worldview and the personal psychological and motivational attitudes of staff, the presence of stress during production and the establishment of resilience, which are integral elements of survival during the war. The third aspect aims to maintain an appropriate balance between the human and technological development of enterprises in order to preserve jobs and the positive dynamics of production and service provision. The answer to the question of which HR management models are relevant is not only of practical importance but also of theoretical significance, as it will allow existing approaches to human productivity to be developed. The second and third aspects reflect the peculiarities of ensuring human dignity in remuneration and the social responsibility of business entities, as well as the formation of an atmosphere of corporate unity and trust in the process of managing the personnel of enterprises under the influence of current factors.

MATERIALS AND METHODS

The methodological basis of the study is formed following its purpose and includes a legislative and regulatory component; a theoretical component represented by scientific articles, monographic studies and statistical data; and the results of expert assessments in the subject area. To reveal the essence of the highlighted issues, as well as to highlight their content, legal acts of the Verkhovna Rada of Ukraine (Law of Ukraine No. 2136-IX, 2022; No. 2232-XII, 2022; No. 2352-IX, 2022), information of the Cabinet of Ministers of Ukraine (800 enterprises relocated..., 2023) and official materials of the State Statistics Service of Ukraine (n.d.) were used, which reflect the trends and forecasts of the socio-demographic and economic situation in Ukraine caused by the war and focus on the feasibility of improving the HR management systems necessary to develop changes in the field of security measures and corporate unity. Analytical information on the dynamics of relevant trends in the development of HR management (Schwartz et al., 2017) and a survey of employees on the main challenges of human sustainability (Cantrell et al., 2024) became the basis for developing directions for creating responsible and harmonious HR management systems for modern and future enterprises.
The analysis of previous studies and publications confirms the existence of a wide range of scientific achievements reflecting the theoretical, methodological, technological and applied aspects of the outlined issues. Researchers underscore the problems of labour resources management, whose role in increasing the innovativeness of enterprises is evident, as discussed in the joint article by N. Mitsenko et al. (2022), which states that "human resource management is crucial to support an enterprise (organization) to improve efficiency, manage corporate and ethical issues that go beyond economic efficiency, and support the future development of the enterprise (organization) and the direction of its innovation activities". This idea is continued by O. Yakovenko (2022), who considers the peculiarities of remote personnel management in modern conditions and focuses on the transformation of the personnel management process in terms of planning, recruitment, organization and direct management, motivation and communication. Following I. Gontareva et al. (2022), the HRM strategy is a structural element of the enterprise's strategic management system, highlighting "... five mandatory principles for those who want to win the war for talented employees, managers and make talents a competitive advantage" of enterprises. O. Naumova (2021) continues this idea and deepens it by systematizing the peculiarities of employee behaviour under different types of HRM strategies. The functioning of enterprises during the war, the problems faced by management in personnel management and the ways to solve them have also been the subject of research by several scholars.

The methodological basis of the research is a set of scientific methods that ensures the methodological integrity of the study, in particular: analysis and synthesis, for analysing modern features and causes and generalizing their impact on the formation and development of HRM; the systemic method, for revealing the essence of HRM through the prism of corporate culture, the application of the latest technologies and the growth of human productivity, that is, the unity of content (productive and creative human labour) and form (the process of development of corporate culture under the influence of new technologies), and for streamlining the areas of HRM; grouping and generalization, for developing, in the context of global trends in the field of human capital, the proposed directions for creating harmonious human resource management systems of enterprises based on the use of the method of expert judgement. The use of the statistical method and a survey made it possible to form a statistical base, highlighting informative data on the internal displacement of the population and enterprises under martial law, growing occupational stress and threats of job loss due to technological changes, the spread of practices and restrictions on the processing of employees' data, and trends in the development of microcultures.
RESULTS AND DISCUSSION

In modern business, human-centric models in the HR management system are dominant for the vast majority of business entities in various types of economic activity. Employee focus is becoming an axiom of business management through the prism of the obvious interdependence between the long-term development of a business project and the satisfaction and personal growth of employees, which guarantee the success of the business mission. This awareness is not only a requirement of the times but also a crucial guideline in the process of building intellectual and knowledge potential, attracting talent, and ensuring professional and career development.

At the same time, the trends of modern socio-political and economic development in Ukraine and the world indicate that the desire of enterprise personnel to expand the boundaries of personal freedom and dynamic mobility, and to transfer the solution of various problems to digital technologies, has run up against the wartime realities of uncertainty regarding job security, personal and collective safety, economic and social dependence, psychological resilience, etc. From a scientific and managerial perspective, it is necessary to investigate how businesses and employees are adapting to the rapidly changing new reality of life. The study proposes to address modern trends in personnel management which have become relevant in the context of the Russian-Ukrainian war and which allow the development of theoretical and applied models for promoting the unity and harmonious development of personnel.

The activities of the staff of Ukrainian enterprises under martial law are characterized by several peculiarities: 1) changes in labour legislation (employers may increase working hours from 40 to 60 hours per week, or reduce them from 36 to 40 hours; limit rest for employees from 42 to 24 hours per week and the duration of annual leave to 24 days; change the start and end times of shifts; transfer employees to another job not specified in the employment contract without their consent and without reducing their wages; refuse to grant unused or regular leave to employees of certain enterprises; and dismiss employees during temporary incapacity for work or leave if employment cannot be provided due to the destruction of production or organizational conditions, means of production or property of the employer as a result of hostilities; at the same time, an employee may terminate an employment contract if there is a threat to life and health or the company is located in a hostilities zone; the ban on working on weekends, reduced working hours at night, reduced working hours on the eve of a public holiday, and the requirement to notify employees of changes in essential working conditions and in remuneration 2 months before their introduction, etc.,
are cancelled (Law of Ukraine No. 2136-IX, 2022)); 3) internal displacement of the population (the number of officially registered internally displaced persons (IDPs) in Ukraine reaches 4.9 million people (Ministry of Social Policy of Ukraine, n.d.); according to a study by the International Organization for Migration (IOM), only 40% of IDPs in Ukraine are employed, while 14% are actively looking for work and 6% are inactive, which means that IDPs have significant labour potential that they are trying to realize; for comparison, the employment rate among non-displaced residents is 50% (Ukraine - internal displacement report..., 2023)); 4) evacuation and relocation of enterprises (the Government of Ukraine recommended that enterprises from the territories outside its control relocate to safer locations; as of 5 May 2022, 500 businesses had moved to safer locations, of which 300 had resumed operations; in March 2022, applications were received from 1,266 businesses, while in June 2022 only 79 businesses applied for evacuation (Uvarova & Saprykina, 2023); at the same time, Deputy Minister of Economy of Ukraine Tetiana Berezhna stated that "since the beginning of the war, 800 enterprises have been relocated from dangerous regions with the support of the state. As of the beginning of March 2023, 623 enterprises are already operating in their new locations. Another 239 are looking for a convenient location or mode of transport. More than 650 companies that planned to relocate their production facilities refused to move due to the de-occupation of the territories where they are located. In addition, some businesses are now returning to their previous locations due to the improved security situation, particularly in Kharkiv, Chernihiv and Sumy regions. 44 enterprises have already returned" (800 enterprises relocated..., 2023)); 5) mobilization of the population ("Up to 700,000 people are mobilized to the Armed Forces, up to 60,000 border guards, up to 90,000 National Guard, up to 100,000 National Police. In 2023, more than a million people in uniform will ensure the activities of the security and defence sector" (More than a million Ukrainians..., 2022)); 6) logistical collapse, unavailability of energy resources, and the elimination of markets for goods and services, which also pose significant risks to the work of employees and households, as noted by A. Kotsur et al. (2022).

Along with national trends, the activities of personnel are also significantly influenced by current global trends that threaten human resilience. According to many researchers (Future Forum Pulse, 2023; Bracy, 2023), they include:
• unrestrained professional burnout;
• concerns that artificial intelligence (AI) will eliminate jobs;
• the rapidly growing need for new skills;
• support for older and contract employees;
• poor conditions for employees in their first job (employees in their first job make up about 80% of workers worldwide (Technology can help..., 2022); according to N. Dhingra et al. (2021), E. Frauenheim (2022) and M. Gonzales (2023), they feel underserved by education, are least likely to have the opportunity to work on targeted programmes, receive significantly lower salaries and fewer days of paid leave, and are most likely to lack health insurance);
• climate change and energy generation sources, which have a decisive impact on the workforce in most countries (according to P. Philip et al.
(2022), more than 800 million jobs worldwide, or 25% of workers, are highly vulnerable to extreme climate conditions that affect, for example, access to a clean environment (water, air), as well as to the economic consequences of the energy transformation).

Accounting for the above trends and new challenges, the main factors influencing the effectiveness of HR management under martial law and global expectations can be identified, as shown in Table 1.

Table 1. Factors influencing the effectiveness of HR management under martial law:
Hazards and emergency conditions: personnel may face various hazards, such as attacks by enemy forces, shelling and bombing, air raids, traffic restrictions, or power, water and heating outages, which require an action plan prepared in advance to protect, secure and move employees, and their mental and physical readiness for change.
Communication: particularly valuable, as it can affect the performance of staff and their safety; it is important to have an effective and reliable communication system that allows employees to transmit information quickly, clearly and concisely, and that ensures information hygiene in communication.
Stress: staff can face high levels of stress and emotional strain, which can reduce productivity and performance; this factor should be addressed, and action plans should be in place to reduce stress and support employees.
Discipline: a key factor in the behaviour and work of staff; employees are obliged to follow rules and instructions to ensure safety.
Leadership: the team leader (manager) performs a crucial function and should have a clear strategy and action plans for the staff, as well as be ready to make quick and informed decisions in dangerous situations.
Restrictions on freedom of action and decision-making: during martial law, restrictions may be imposed on the freedom of management decisions and actions of business managers, which narrow the possibilities, efficiency and effectiveness of management processes.

The factors listed in Table 1 require managerial decisions in the HR management system that are commensurate with these challenges and that will ensure workplace safety, implement operational changes to work schedules and operational processes, develop stress management programmes (measures), organize training and retraining, and advise employees on various issues of life under martial law.

In this context, conditions should be created for employees of enterprises, and programmes should be developed (guaranteeing safety, supporting or motivating development, career growth, social responsibility) that will promote safe work, form value orientations of unity and trust, and increase the effectiveness of work. Therefore, N. Mitsenko et al. (2022) propose HR management based on the concept of sustainable development, in compliance with certain principles, in particular: human resources development with a long-term perspective; flexibility; employee empowerment; fair and equal opportunities; external partnerships; employee care; and profitability.

Taking care of employees in times of war on the basis of formalized rules uniform for all is the best way to build trust. The attitude to people, the transparency of processes and decision-making, and salaries at a level sufficient for employees to provide for their families and restore their working capacity always create a culture of loyalty and morality in the company. According to N. Mitsenko et al.
(2022), the creation of opportunities for employees to acquire various professional (technical) and interpersonal skills, as well as social skills (volunteering, taking care of oneself, dealing with stress, developing good nutrition habits, recovering from work, etc.), complements such a human-centred policy, which is especially relevant in the management of the personnel of Ukrainian enterprises in the context of Russian aggression.

There are also organizational and managerial situations in which formal and informal restrictions do not allow companies to meet their staff development needs. N. Markova (2015) identified several reasons for the emergence of contradictions in the sustainable development of personnel, in particular:
• excessive requirements for employees who do not have an appropriate basis in terms of the socio-economic justification of the need to perform the professional duties set out in their job descriptions;
• non-compliance with the provisions of the laws and regulations of Ukraine governing relations in the field of hired labour management;
• organizational, economic, technical and technological limitations of the enterprise in meeting the needs of employees in their development;
• organizational and bureaucratic barriers to the implementation of an HR policy for staff development, due to the low qualification of HR employees, limited financial resources for the implementation of current and future staff development plans, the complex organizational structure of enterprises with communication problems, etc.;
• a low level of staff motivation to improve their professional level and to expand their area of competence, responsibility and career growth.

Addressing the reasons, limitations and opportunities of HR management in modern realities, it is advisable to formulate the main directions of promoting the development of personnel under martial law, as presented in Table 2.

Table 2. Main directions of promoting the development of personnel under martial law:
Support and compassion: it is important that owners and management effectively support and empathize with staff during martial law.
It is advisable to extend care for staff to employees' families, which will build trust and a sense of belonging and mutual support.
Financial incentives: management must provide employees with the necessary material resources and tools for effective work.
Learning and development: in times of war, employees may need new knowledge and skills to perform effectively; it is therefore important to ensure that employees are trained and developed both in specific areas and in a comprehensive manner, so that they are prepared for various challenges and responsibilities. In this context, human resources policy should also provide for the development of spiritual and emotional intelligence, which will stimulate a new quality of thinking, increase self-awareness of one's mission, enable self-control over negative emotions, and develop the ability to link causes and effects into a single whole.
Collaboration and communication: management must create conditions for cooperation and communication between employees. This can help ensure effective coordination and real-time problem-solving based on the key communication functions (informing; communicating; joint decision-making; planning; performance review; division of duties; and joint work and responsibility). In addition, it is advisable to develop a communication culture, as common values, mental attitudes and stereotypes are a unifying factor that can ensure harmonious communication and an effective atmosphere of cooperation and consolidation.
Remote management: human resource management in wartime may require interaction and work with remote teams located in different places, and may also be necessary to ensure the safety of employees during the war.
Taking care of employees' well-being: all enterprise processes need to be built in such a way that employees feel them to be organic (natural), the culture is built around people (in small enterprises, according to the traditions of family life), and social compensation is developed (health insurance, cheaper food, corporate transport, etc.), reflecting the social aspect of management technology.
Cultivating strength of mind and determination: developing staff morale, i.e., a state of mind in which employees become free (do not focus on everyday problems), rising above the daily routine of life and work; developing the mental component of staff behaviour based on national patriotism and psychological resilience, the desire to overcome obstacles, unwavering will, courage and determination.
Publicity and transparency: developing a tradition of open discussion of the problematic and painful issues of the enterprise, division or project team. Highlighting only the images of leaders and winners in the life of an enterprise leads to a distortion of reality and to wrong decisions. Honesty and transparency with staff add energy and fairness to processes and reinforce trust as a core value.

The areas listed in Table 2 ensure the formation of trusting relationships among the company's staff, help staff adapt to new challenges and business conditions, establish new forms of internal communication and training, introduce new management and motivation systems, strengthen the comprehensive connection of employees with the company, and help attract potential candidates for employment. A. Kotsur et al.
(2022) believe that the main tasks of HR professionals in times of war are "adaptation of the HR management system and internal HR documentation to changes in legislation; ensuring the necessary number and quality of staff for effective operation in the context of large-scale external and internal migration and mobilization of the population; retention of existing staff; use of remote employment and additional functions of accounting and control of remote work; creation of conditions for the operation of evacuated enterprises in the new territory, as well as proper housing, social and living conditions for their employees".

In 2017, Deloitte Consulting surveyed HR managers to identify the then-current priority areas of HR development and trends in their change until 2022 (Fig. 1). During this period, several forecasts made at the time were confirmed by the realities of 2023, while performance management, HR analytics, staff experience, etc. have become less relevant in the HR management process. Some of these trends are worth discussing in more detail.

1. Ensuring human resilience. When people thrive, businesses thrive. To be sustainable, businesses need to create value for all the people associated with them. Moving towards sustainable human development implies, in parallel, addressing the impact of the environment/climate on health; creating "good jobs" for the economy (e.g., paying fair wages that meet a decent standard of living); a positive impact on communities; and contributing to equity for groups that have historically been marginalized by race, gender or other identities.

At the same time, according to the survey, the relationship between employees and businesses is becoming increasingly complex amid large-scale contradictions in society and the business environment. Only 43% of employees believe that their working conditions are better than at the beginning of their employment. Employees therefore identified growing work stress and the threat of human jobs being replaced by technology as the main challenges for businesses that promote human sustainability (Fig. 2; in answer to the question "Which of the following developments do you worry about as it relates to your work?", the top worries were increasing work stress leading to worse mental health, the "always-on" economy enabled by digital technology, employers being able to digitally monitor work without consent, and a lack of connection and belonging due to more remote or hybrid work). This signals a change in the concept of human resource management and requires systematic management decisions from enterprises to ensure that they create synergies of value for the people they affect in multiple dimensions. Based on the results of research conducted by Z. Ton (2017) and the analytical work of S. Cantrell et al. (2024), the following impact dimensions can be identified: staff (fair wages and long-term financial well-being; skills, employability and career opportunities; equity and addressing systemic causes of inequality; physiological and psychological safety; social, cultural and mental balance); potential employees (training and development of staff for future vacancies; enhancing human outcomes for external supply chain workers; enhancing human outcomes for contract or informal workers); society (improving public health, including the impact ...).
2. Increasing productivity and defining new indicators for assessing employee performance. As human performance takes centre stage, the question arises: are traditional performance measures sufficient? In the era of human-centric operations, modern data sources and artificial intelligence can help businesses move from measuring employee productivity to measuring human achievement. To measure human performance, business results and human results should mutually reinforce each other. In this context, it is worth noting that "compliance with new global regulations on the use of personal data" entails the risk of excessive control over people through productivity indicators, not only in the workplace but also in private life.

Therefore, from the perspective of human dignity, the representative indicators proposed by S. Cantrell et al. (2024) should be treated with great care and caution, especially those that can be combined: 1) business results (customer satisfaction, efficiency, revenue growth and profitability, time to market and speed to market, innovation and its implementation, quality); 2) human achievements (employment opportunities and career growth, fair remuneration, ownership and belonging, physiological and psychological safety, personal goals and their content, gaining experience and skills, happiness and well-being). While artificial intelligence can be useful for assessing, analysing and improving business and employee performance, it can also damage people and an entity's reputation if used inappropriately. Several researchers have expressed such views, drawing attention to the growing number of companies experiencing disputes with employees due to increased control. J.B. Leslie & K. Simmons (2023) note that "productivity paranoia - the fear that remote workers are unproductive - can lead to a state of surveillance and breach of trust, rather than management decisions aimed at achieving real workforce efficiency and productivity in modern workplaces".

These trends are supported by enterprise surveys, which show that employees are tolerant of the collection of additional data through familiar traditional technologies such as email or calendars, but are mostly negative when it comes to data collected using new technologies such as wearable devices and headsets. Only 9% of employees approve of the collection of personal data using neurotechnologies, 23% approve of location tracking technologies, 23% of external websites, and 28% of XR headsets (Cantrell et al., 2024). At the same time, contrary to these staff sentiments, the majority of managers intend to implement such data collection technologies in the coming years (Fig. 3). Such an administrative approach can cause conflicts between management and employees and threaten trust within the enterprise and among stakeholders. At the same time, business owners and managers need to understand that people cannot and should not lose their rights and freedoms for the sake of economic relations or the most advanced technologies. Therefore, one of the main tasks of HR professionals is to establish responsible practices and limitations for the processing of personal data and the use of artificial intelligence, as well as to plan preventive measures to address employee concerns about the use of new technologies in the process of collecting and analysing professional and personal confidential data.
3. The idea that, instead of striving for a common corporate culture, businesses should create a "culture of cultures" tailored to the needs of business units or local teams and compatible with the values of the entire enterprise is gaining traction among many HR academics. At the same time, most large enterprises have developed standards of a unified corporate culture, which new employees should join harmoniously, realizing their intellectual, creative and professional potential based on the principles and models of behaviour established by those standards. It is therefore worth focusing on the understanding and unambiguous interpretation of the following definitions: culture, corporate culture, organizational culture, labour organization, and microculture.

Culture is a way and consequence of human activity that reproduces personal and social existence in all its manifestations. For instance, H. Zakharchyn (2011) notes that culture can be interpreted as an ethnically specific paradigm of life creation which, accumulating certain knowledge, meanings, creative abilities and skills of the people in material and spiritual values, sign systems, and so on, acts as a special way of being of a certain ethnic group in relation to others. In this context, for an enterprise, culture is the "way things are done" in an organization: sustainable patterns of behaviour over many years, supported by shared practices and experiences, values and principles of the enterprise (Beyond productivity..., 2023).

In the process of development of social and labour relations at the enterprise, according to T. Kytsak (2008), corporate culture manifests itself as a complex and multifaceted system of values, beliefs, business principles, norms of behaviour and traditions, which becomes an important intangible resource of the enterprise, as it provides social ties and communicative and informational communication, harmonizes relations between employers and employees, and thus significantly affects the efficiency and competitiveness of the enterprise. At the same time, as argued by H. Zakharchyn (2009), unlike corporate culture, organizational culture is a system of relationships that have developed in an organization based on accepted values, basic ideas and norms of behaviour necessary to fulfil its mission. The concept of "organizational culture" therefore refers to the degree of organizational ordering of the operational and management process of an enterprise and is only one element of the organization's culture.

Most researchers name the organization's values as a component of corporate and organizational culture in their definitions, so it can be argued that values are the core of corporate culture. Thus, when the scientific discourse refers to "common" values, such as innovation, teamwork, excellence and safety, which should form the basis for the development of microcultures in enterprises, this is a classic description of the constituent elements of organizational culture. Such value elements may differ significantly in practice when it comes to multi-sectoral enterprises, as well as those with many thousands of employees structured into hundreds of units or geographically diversified.
In this regard, the diversity of microcultures at an enterprise should be interpreted not as the presence or "fuelling" of the development of different worldview beliefs, spiritual, moral and ethical principles and norms of behaviour, or linguistic, cultural or national identities and traditions among the enterprise's personnel, but as the development of approaches to improving the organizational and operational processes of economic activity among the multiple units and teams of the enterprise. This understanding will help develop the flexibility and efficiency of the company's divisions and a certain autonomy of functions and teams, unlock the intellectual and knowledge potential of employees, and enable them to quickly adopt best practices and integration results. This trend is supported by survey results, which show that almost 71% of respondents say that focusing on individual departments and creative teams as the best environments to foster culture, consistency and flexibility is key to the success of their teams. At the same time, 50% of managers indicate that an organization's culture is most successful when it has a uniform degree of variation (Cantrell et al., 2024).

An important aspect of harnessing the potential of microcultures is to focus on their development. The fundamental focus should be on the coordination and unification of their manifestations and practices (e.g., diversity of creative ideas, innovations, flexibility, individual work patterns, etc.) around a unified and recognizable corporate culture. The idea of cultivating different models of staff organization and practices to promote employee development, in the context of martial law and technological transformation, can have a positive impact on the experience and sustainability of enterprises. The reasons for the increase in the number of new models of labour organization are as follows:
• expanding opportunities for hybrid or remote work: according to J. Wood (2022), 70% of employees worldwide prefer a hybrid structure, which will require expanding opportunities for team interaction outside of offices; at the same time, O. Pickup (2023) notes that employees of hybrid teams create closer ties within their units, but lose a proper connection with the enterprise as a whole;
• the emergence of operational practices specific to individual teams, departments or functions (differences in the type of employment (full-time or part-time) and working conditions (on-site, hybrid, remote));
• complex and unclear procedures for approving innovative projects and processes, as well as differing management approaches and decision-making styles (command and control, decentralized, consensus-based, distributed);
• the need to implement a special approach to attracting or retaining the best talent and intellectuals (preserving the unique culture of a newly acquired company in the process of mergers and acquisitions; ensuring freedom and flexibility);
• changes to work schedules, shift duration, the working week, etc.;
• the size of the company and its organizational maturity.
The existence and development of many microcultures in one local enterprise in the national economy will inevitably lead to the threat of internal disruption (destruction) of the system and to several negative consequences in the future, in particular:
• intensification of internal competition between such microcultures for the right to dominate, and thus to overcome and displace the weaker ones;
• formation of informal groups and environments that are not united by common business goals and objectives;
• imbalance in the system of staff motivation and divergent understandings of the meaning of "justice", "discipline", "order", "responsibility", "duty", "right", "freedom", "patriotism", etc.;
• disorientation of the enterprise's managers and leaders between different local approaches and teams, which will make it impossible to agree on and achieve the common goals of the enterprise;
• loss of the enterprise's identity and unity, and of the recognition of the employer brand in the labour market and among stakeholders, as microcultures spread (grow);
• disintegration of horizontal and vertical links between structural units, as well as an imbalance between control and empowerment in the management system;
• dynamic growth and sharpening of disagreements between employees of different departments (teams, branches), and between employees and management, which will impede the achievement of results for business and staff;
• increased external instability around the enterprise, as a developed corporate culture, a "monoculture", is a safeguard against vulnerability to stresses under martial law and various ideological, economic and technological challenges.

Corporate culture should not be an internal barrier to achieving the company's goals and developing individual teams, but a source of inspiration and satisfaction of employees' needs that supports the well-being and focus of staff development. It is therefore necessary to integrate the life cycle of each new talent into the corporate culture of the enterprise, and not vice versa, by creating a microculture for the life cycles of individual talents. Recruitment processes, such as hiring, performance management and development, should be clear to potential talent upfront, so that they can quickly adapt to the company's unique corporate culture, functions and location.

Given the analysed trends in human capital, the creation of responsible and harmonious human resource management systems for modern and future enterprises should be carried out in the following areas:
1. Overcoming egocentrism, fostering a culture of balanced interests and harmonizing relationships. The excessive promotion (dominance) of theories and practices of developing global and personal values and social profiles, and of expanding the boundaries of the personal freedom of personnel in the business environment and personnel management, points towards the construction of a present and future society in which a person must constantly strive to satisfy personal interests, and to satisfy them at the company rather than outside of it, in private life. This approach is the absolute opposite of aristocratic morality, marked with the seal of honour, which is characterized by the ability to act for the benefit of others (God, faith, nation, state, society, community, team, teammates, brothers, ladies, neighbours) contrary to personal interests. Therefore, in the strategic dimension, the goals and objectives of the enterprise HRM system should be to foster a culture of balanced interests among the participants of the business process: employees, the owners of the enterprise, and society. Human resources management should be focused on the constant search for ways in which these participants can complement and enrich the enterprise, personal and social life, without constantly seeking personal benefit and enrichment at the expense of others.

2. Filling professional and personal space with thinking, not information. One of the problems of employees in post-industrial enterprises is the consumption of large amounts of unnecessary information, which overwhelms their professional and personal space, leaving no room for thinking. The achievements of lifelong learning should be applied during active periods of work between regular professional development and staff training. Most employees involved in such constant processes of "learning for the sake of learning" do not have time to consciously apply the acquired knowledge and turn it into experience. Over time, however, they realize that this knowledge is not useful to anyone: it has not changed the operational and management process at the enterprise (unit) and has not affected the level of remuneration for their work. Such work is no longer productive. It creates an environment of simulacra, where "everyone has to be in business" and "be a leader". Thus, according to J.
Baudrillard (1994), humanity is losing touch with reality and entering the era of hyperreality, when the picture is more important than the content, the document (diploma, certificate) is more important than knowledge, and the connection between objects, phenomena and signs has long been broken. In the activities of enterprises, a system is being scaled up whose law and purpose is the production and overproduction of only certain words, signs and symbols (rituals). In this system, "information devours itself": it destroys communication - social, human, true and national. The creation and dissemination of "information for the sake of information" stages communications, creating "illusions of communication and understanding" as well as of thinking. Meaningless "knowledge" is accumulated that does not help to create anything real, material or spiritual, and brings no benefit to its possessor. This boom in "pumping information" on individual skills and functions is an attempt to direct the thinking of staff into a mechanistic plane and to show that individual elements of the whole process (enterprise) are linked by the principle of determinism and can exist independently of each other. That is, a specialist with a mechanistic (fragmentary) mindset sees only individual processes, events and functions at an enterprise or in a department. Instead, it is necessary to develop holistic thinking among specialists, so that they can build the most complete picture of what is happening at the enterprise and identify the links between production processes and management decisions, between very different phenomena of the enterprise's external environment, situations and events - a holistic perception of the business entity.

3. Overcoming internal misunderstandings and a spirit of disagreement. One of the threats to the development of socio-economic systems (enterprises, states) is internal misunderstanding and a spirit of disagreement. The aggravation of contradictions between different generations of employees regarding the role and importance of corporate culture and attempts to make it a secondary issue in business development, the aggressive imposition of "progressive views" on various topics (gender ideology and sexual identity, etc.) on predominantly Christian workforces, ignoring the state language in business documentation and during training and education for mono-ethnic teams, a high level of bureaucracy and an overly complex staff evaluation system, as well as an unjustified gap in salaries and remuneration between management and employees, are among the main causes of disruption in modern organizations. Therefore, the leading idea is to maintain the unity of the company's staff at all costs, based on the fundamental ideological principles of organizing the management process, without interfering in the private life of employees outside the company. This unity of teams should be built, first of all, on national consciousness (for Ukrainian enterprises, on Ukraine-centricity), respect for the state, laws and national security, fair remuneration, moral values and the national traditions of the region, a decent attitude to people of different generations, and corporate culture.
4. Personal and team discipline and responsibility. Discipline helps one learn to control fear, risk, danger, laziness, indifference and bad habits. Discipline can help develop a responsible personality and achieve harmony in professional and private life. Discipline is a combination of freedom and responsibility; it is the potential for inspiration.

Therefore, it is important to create human resource management systems aimed at overcoming egocentrism and maintaining a culture of balanced interests among business stakeholders. Filling professional and personal space with thinking is a key aspect of staff development, as it helps to avoid information overload and the creation of an environment where knowledge does not translate into action. Overcoming internal misunderstandings and teaching the team discipline and responsibility are important steps in maintaining unity and efficiency in an organizational environment.

CONCLUSIONS

In the modern environment, companies are transforming their HR management systems to expand the scope of human capital development in line with the challenges. The analysed statistical information on the long-term and chaotic migration of the population, the internal displacement of enterprises and personnel, and the mobilization of the population in Ukraine confirmed the need for new security measures and employee motivation, and became the basis for the formation of directions for promoting staff unity and development under martial law. The emergence of new challenges for HR management caused by global trends in the field of human capital necessitates the development of the latest HR management tools. Accordingly, the article examines the role of people in achieving entrepreneurial success through the prism of ensuring human sustainability, productivity growth and the spread of microcultures in the workplace.

The dynamics of these processes show that effective HR management involves cultivating different models of organizing staff work and practices to promote employee development in the context of martial law and technological transformations, which should have a positive impact on the experience and stability of enterprises. Therefore, to develop positive trends in human capital, it is necessary to create responsible and harmonious human resource management systems for current and future enterprises. In the applied aspect, it is advisable to overcome egocentrism and foster a culture of balanced interests, fill professional and personal space with thinking, overcome internal misunderstandings and the spirit of disagreement, accustom oneself and one's team to discipline, and avoid the dependence of staff on technology, or of economic relations on it, on the way to progress. Prospects for further research include the formation of an effective system of social guarantees and job security for different categories of enterprise personnel in the context of the digitalization of the economy, ensuring conditions for their free professional development and maximizing the creative potential of each employee.

Figure 1. Dynamics of the most important trends in the development of personnel management. Source: compiled by the authors based on J. Schwartz et al. (2017).

Figure 2. Workers identify top challenges to human sustainability. Source: S. Cantrell et al. (2024).
2024-06-23T15:14:11.691Z
2024-05-02T00:00:00.000
{ "year": 2024, "sha1": "c65dcbf1655e9a20ad4851bf56bc663e08c4d76d", "oa_license": "CCBY", "oa_url": "https://eem.com.ua/web/uploads/pdf/Economics,%20Entrepreneurship,%20Management,%20Vol%20%2011,%20No%201,%202024-67-79.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a1d151434bdf1fbc6577ed7cad0effe54b52c7cc", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
32837922
pes2o/s2orc
v3-fos-license
Perineal Trauma in Primiparous Women with Spontaneous Vaginal Delivery: Episiotomy or Second Degree Perineal Tear?

This study is based on a board-approved parent study in healthy, nulliparous, continent pregnant women attending the public health care system of Catalonia (northeast Spain). Women were selected at the beginning of their gestations and followed during pregnancy and postpartum with the aim of describing the natural history of urinary and anal incontinence and identifying the associated risk factors. They were informed of the objectives and nature of the study and freely signed an informed consent, and withdrawal from the study at any time during follow-up was not precluded. A total of 1128 nulliparous pregnant women were included, and delivery data were obtained from 938 of those recruited initially. The rate of vaginal delivery was 76.8% (n=720), with a total of 489 spontaneous deliveries (67.9% of the vaginal deliveries). Features of the parent study population and methodological details have been reported elsewhere (14). The current study is based on data obtained from the 489 women with spontaneous deliveries. Demographic and obstetrical variables included: maternal age, weeks of gestation, baseline body mass index (BMI), weight gain in pregnancy, induction, anaesthesia, cephalic position, episiotomy, type of episiotomy, perineal tears and their degree, birth weight, and head circumference.

Primary outcome measure

For the purpose of the current study, perineal trauma was defined as any damage to the genitalia (skin, muscle, and fascia) during childbirth, occurring either spontaneously or due to an episiotomy. Perineal tears were classified as first, second, third or fourth degree, according to the classification of the Royal College of Obstetricians and Gynecologists: first degree, injury to perineal skin only; second degree, injury to the perineum and perineal muscles, but not involving the anal sphincter; third degree, injury to the perineum involving the anal sphincter complex; and fourth degree, injury to the perineum involving the anal sphincter complex and anal epithelium.

Categorical variables were described as frequencies and percentages. Incidence rates of episiotomy and perineal tear, and their corresponding 95% confidence intervals (95% CI), were calculated. The association of second degree tears with demographic and obstetrical variables was estimated through bivariate and multivariate analyses, obtaining relative risks (RR) and odds ratios (OR), respectively, as well as their 95% CI. Only women with a singleton fetus were included in the analysis (8 twin pregnancies were excluded). A P≤0.05 was considered statistically significant.

Results

A total of 720 vaginal deliveries were registered in the parent study (nulliparous cohort), 489 of which were spontaneous deliveries with the data below. The remaining were considered assisted because of the use of forceps (n=136), spatulas (n=72), vacuum extraction (n=15), or breech presentation (n=4); in four cases the information was missing. About 91% (95% CI: 88%-93%) of primiparous women with spontaneous vaginal deliveries showed some degree of perineal trauma. Figure 1 depicts these data. The estimated episiotomy rate for all vaginal deliveries in the nulliparous cohort was 72.8% (95% CI: 69.4%-76.1%) and the perineal tear rate was 31% (95% CI: 27.3%-34.7%). The occurrence of tears, with and without episiotomy, in spontaneous vaginal deliveries is summarized in Table 1.
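The methods above name the key estimators: relative risks from the bivariate analyses and odds ratios from the multivariate analyses, each with a 95% CI. A minimal sketch, in Python, of how RR and OR with Wald-type 95% confidence intervals are commonly computed from a 2x2 table; the cell counts below are hypothetical and for illustration only, since the paper reports rates rather than raw cells:

```python
import math

def rr_or_with_ci(a, b, c, d, z=1.96):
    """2x2 table: exposed (a events, b non-events), unexposed (c events, d non-events).
    Returns (RR, RR 95% CI, OR, OR 95% CI) using Wald intervals on the log scale."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    rr_ci = (math.exp(math.log(rr) - z * se_log_rr),
             math.exp(math.log(rr) + z * se_log_rr))
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    or_ci = (math.exp(math.log(odds_ratio) - z * se_log_or),
             math.exp(math.log(odds_ratio) + z * se_log_or))
    return rr, rr_ci, odds_ratio, or_ci

# Hypothetical counts for illustration only (not the study's raw data):
# exposure = no episiotomy; event = perineal tear (any grade).
print(rr_or_with_ci(a=140, b=39, c=27, d=283))
```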
Of the tears diagnosed in spontaneous vaginal deliveries, 87.5% occurred in the absence of episiotomy (Table 1). According to our data, nulliparous women with spontaneous deliveries who did not undergo an episiotomy were 9 times more likely to present a tear (any grade) than those who received an episiotomy (RR = 9.6, 95% CI: 6.3-14.6, P<0.001). The estimated episiotomy rate for spontaneous vaginal delivery in the nulliparous cohort was 63.4% (95% CI: 59.0%-67.8%) and the perineal tear rate was 35.3% (95% CI: 30.7%-39.9%). In spontaneous vaginal deliveries with episiotomy, a high proportion of primiparous women (92%; 95% CI: 88.5%-95.5%) did not have a recorded tear, compared to 76.4% (95% CI: 69.8%-83.0%) of those without an episiotomy. On the other hand, the rate of tear without episiotomy was 86.3% (95% CI: 80.6%-92.1%). In the nulliparous cohort, the rate of an intact perineum after a spontaneous delivery was estimated to be 9.4% (95% CI: 7.0%-12.5%). When the association between second degree tears and several demographic and obstetrical variables was assessed, only episiotomy reached statistical significance (P<0.0001), revealing the protective effect of episiotomy against perineal trauma in primiparous women. When considering spontaneous deliveries, the occurrence of episiotomy was the only significant variable in bivariate analyses (RR=0.20; 95% CI: 0.12-0.33; P<0.0001), showing a protective effect; moreover, the risk of second degree perineal tear attributable to the absence of episiotomy was 80.0%. In multivariate analyses, episiotomy remained the only factor statistically associated with second degree tears (OR=0.035; 95% CI: 0.012-0.097; P<0.0001); that is, the absence of episiotomy significantly increased the risk of this type of perineal trauma (OR=28.57; 95% CI: 10.31-83.33; P<0.0001) in primiparous women. These estimates were adjusted for BMI, maternal age and birth weight (Table 2).

Discussion

Ninety-one percent of the primiparous women in this cohort study who had spontaneous vaginal deliveries experienced some form of perineal trauma, whether with episiotomy or a spontaneous perineal tear (or both). The high rate of perineal trauma is an important fact, considering that postpartum morbidity is directly related to the extension and severity of perineal trauma (15). A clear and specific evidence-based recommendation on the restricted use of episiotomy exists (1), although there is no consensus as to what is considered an appropriate rate of episiotomy. Carroli states that a rate of more than 30% would not be justified in the context of restrictive use of episiotomy considering all vaginal deliveries (16). However, the absence of an episiotomy does not guarantee an intact perineum, and in most primiparous normal deliveries in which episiotomies are not performed the result is usually a second degree perineal tear. The restrictive use of episiotomy, which leads to a decrease in this type of intervention, could have long-term consequences similar to those that the systematic performance of this procedure was intended to avoid (12). The most important limitation in comparing groups and rates of episiotomies lies in the lack of sufficient data exclusively for primiparous women. In this sense, our work is valuable, as it presents a cohort of primiparous women with spontaneous vaginal delivery.
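The 80% attributable risk reported above is consistent with the bivariate RR of 0.20 for episiotomy: inverting it gives an RR of 5 for the absence of episiotomy, and the attributable fraction among the exposed is (RR - 1)/RR. A short arithmetic check in Python, assuming that standard formula was used:

```python
# Consistency check: a protective RR of 0.20 for episiotomy implies RR = 5 for
# its absence; the attributable fraction among the exposed = (RR - 1) / RR.
rr_episiotomy = 0.20
rr_no_episiotomy = 1 / rr_episiotomy              # 5.0
af_exposed = (rr_no_episiotomy - 1) / rr_no_episiotomy
print(f"Attributable fraction: {af_exposed:.0%}")  # 80%, matching the text
```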
Furthermore, most studies focusing on primiparous women include a relatively small sample (4,17,18); according to these studies, the rate of episiotomies for primiparous women (including spontaneous and instrumental deliveries) following a selective episiotomy practice ranges from 20.9% (19) upward (20). There is a correlation between the percentage of first and second degree perineal tears (non-severe perineal trauma) and the rate of episiotomies, as this type of perineal tear is more frequent when no episiotomy is performed (2). In fact, with the adoption of a restrictive episiotomy practice, interest in studying risk factors and preventing spontaneous perineal trauma has increased. Perineal tears or lacerations that require suture have increased gradually as the number of episiotomies decreases; in one USA study, 41% of women who underwent vaginal deliveries in 2003 suffered spontaneous tears (5). In a recent study, the rate of perineal lacerations in primiparous women with no episiotomy was 56.7%, and suture was necessary in 30% of them; the only factors associated with increased risk of need for suture were primiparity and instrumental delivery (21). Approximately two-thirds of the primiparous women with spontaneous vaginal deliveries experienced some degree of trauma that affected the perineal muscles, whether caused by an episiotomy or by a spontaneous second degree tear. Thus, only one-third of the primiparous women in this study with spontaneous vaginal deliveries presented first degree or no perineal trauma. Considering perineal tears, three-fourths were diagnosed in the absence of an episiotomy, almost all resulting in non-severe perineal trauma. Consequently, our study has found evidence of a clear protective or preventive effect of episiotomy with respect to second degree tears in primiparous women. This association with a decreased risk of spontaneous perineal trauma was also evidenced in other studies (22,23). Potentially as much as 80% of the second degree tears in spontaneous deliveries could have been prevented or avoided if an episiotomy had been performed. Women who did not undergo an episiotomy were 28.5 times more at risk of presenting second degree tears than those who did undergo an episiotomy. No additional obstetric factors related to second degree perineal tears (other than episiotomies) were collected in our study, which impedes the identification of other risk factors for second degree perineal lesions in which a preventive episiotomy would be justified. It is important to point out that these variables were not considered as an objective of this study, a factor which presents an important limitation. In a Swedish study, factors associated with this type of trauma were identified as perineal edema, high fetal weight, advanced maternal age and prolonged (>60 minutes) or shortened (<30 minutes) delivery time (24). Another study demonstrated a significant association between the head circumference of the infant and trauma extending into the perineal muscles (second degree or deeper) in nulliparous women, although the effect was modest (25). The issue to be considered is whether it is beneficial to reduce the rate of episiotomies in primiparous women at the expense of an increase in spontaneous perineal trauma. Based on current evidence, it is not possible to establish concrete protocols on when an episiotomy is indicated in a spontaneous vaginal delivery.
In common obstetric practice, episiotomy may be more closely related to differing professional styles, local recommendations, experience, training and individual preference than to the individual differences of each woman at the time of delivery. Thus, the rate of perineal trauma should be minimized as far as possible through restrictive use of episiotomy, while also acknowledging, and informing primiparous women of, the high risk that muscular structures will be affected following their first vaginal delivery (as a consequence of an episiotomy or a spontaneous perineal tear). Conclusion The purpose of this study was to compare the rates of perineal trauma and tears and to determine whether episiotomy had a protective effect in primiparous women with spontaneous vaginal delivery. Most primiparous women had documented perineal trauma which, although not considered severe, may affect the muscular structures of the perineum. The absence of episiotomy was the only variable independently associated with second degree perineal tears; therefore, episiotomy showed a clear protective effect against this type of spontaneous perineal trauma. Ethical issues The participants were informed of the objectives of the study and freely signed an informed consent. The protection of the privacy of those participating in the research was also considered. Financial support Public health research grant. Conflict of interests None declared.
2017-08-27T13:13:45.486Z
2015-02-15T00:00:00.000
{ "year": 2015, "sha1": "f7b54d0af76e2cf49ea897b50c334f452555599e", "oa_license": null, "oa_url": "http://www.ijwhr.net/pdf.php?id=107", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b40d7c80324c17998ecad2f2736b3a99eb2a0102", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15309849
pes2o/s2orc
v3-fos-license
Decline of Yangtze River water and sediment discharge: Impact from natural and anthropogenic changes The increasing impact of both climatic change and human activities on global river systems creates an increasing need to identify and quantify the various drivers and their impacts on fluvial water and sediment discharge. Here we show that the mean Yangtze River water discharge of the first decade after the closing of the Three Gorges Dam (TGD) (2003-2012) was 67 km³/yr (7%) lower than that of the previous 50 years (1950-2002), and 126 km³/yr less than during the relatively wet pre-TGD decade (1993-2002). Most (60-70%) of the decline can be attributed to decreased precipitation, the remainder resulting from construction of reservoirs, improved water-soil conservation and increased water consumption. Mean sediment flux decreased by 71% between 1950-1968 and the post-TGD decade, about half of which occurred prior to the pre-TGD decade. Approximately 30% of the total decline and 65% of the decline since 2003 can be attributed to the TGD, 5% and 14% of these declines to precipitation change, and the remainder to other dams and soil conservation within the drainage basin. These findings highlight the degree to which changes in riverine water and sediment discharge can be related to multiple environmental and anthropogenic factors. In the Yellow River, for example, part of the sediment decline has been attributed to decreased precipitation throughout the river's watershed, 40% to soil conservation practices and 30% to reservoir retention 18. Similarly, half of the sediment decrease in the Mississippi River can be ascribed to land conservation and levee construction 15. The Yangtze River (Fig. 1), the subject of this paper, ranks ninth globally in terms of drainage area (1.8 million km²), third in length (6,300 km), fifth in water discharge (900 km³/yr), fourth in sediment flux (500 Mt/yr before its decline in the 1970s; Mt: million tons), and first in watershed population (450 million) 2,19,20. As such, it can be considered the largest and most important river in Asia. More than 50,000 dams have been constructed throughout the Yangtze's watershed 16, and in 2003, the world's largest hydropower project, the Three Gorges Dam (TGD) 13, began its operation. Several post-TGD studies have identified other drivers that have affected water and sediment discharge 21-31, but to date there have been few studies that have attempted to quantify the relative importance of these drivers in the post-TGD period, which is the subject of this paper. Sediment flux at Datong between 1950 and 1968 averaged 507 Mt/yr, but after 1968, due to dam construction, it declined to an average of 320 Mt/yr during the pre-TGD decade; after closing of the TGD, it declined to 145 Mt/yr. That is, mean annual sediment flux at Datong decreased by 187 Mt/yr between 1950-1968 and 1993-2002. Collectively, more than 95% of the sediment decrease at Datong resulted from the decreased sediment supply from the upper reaches (at Yichang, 40 km downstream from the TGD). Sediment flux is clearly a function of water discharge. The post-TGD trend line was far below the pre-TGD trend line (Fig. 3B), suggesting the impact of driving factors other than water discharge. Impacts from Various Environmental and Anthropogenic Drivers. Temporal changes in Yangtze annual water and sediment discharge can be related to both natural and man-made changes.
Precipitation and evapotranspiration (as reflected by temperature change) are the most obvious natural changes to the environment (because of the basin's large size, it is highly doubtful that episodic events, such as floods or earthquakes, would have as much effect as they would on smaller watersheds 2). The effects of human activities are more varied and can have much greater short-term and lasting impact. Water consumption or land-use change can have significant impact on both water and sediment discharges, but the blocking of sediment transport by dams can have far greater impact. Below we discuss the impacts of these various natural and anthropogenic drivers. Impacts from precipitation change. On average, post-TGD basin-wide precipitation was 3% lower than in the pre-TGD long-term period and 6% lower than in the pre-TGD decade; precipitation in 2011 was the lowest since the 1950s, and 2006 and 2009 were also very dry years (Fig. 2B5). The pre-TGD decade was a wet period, whereas the post-TGD decade has been a dry period, mostly reflecting the decadal periodicity of precipitation in the Yangtze Basin 29. For the entire Yangtze Basin (upstream of Datong), 69% and 61% of the post-TGD decrease in water discharge relative to the pre-TGD periods 1950-2002 and 1993-2002, respectively, can be attributed to the decreased precipitation (Fig. 4A), but decreased precipitation explains only 5% and 14% of the post-TGD decrease in sediment flux relative to the pre-TGD periods 1950-1968 and 1993-2002 (Fig. 4B). The relative impact of precipitation on annual water and sediment discharge, however, varied temporally. Precipitation change, for example, explains >90% of the change in water discharge at Datong in the years 2005, 2009, 2010 and 2012, but <20% of the change in 2004 or 2008 water discharge. Similarly, more than 30% of the sediment decrease at Datong in 2011 can be attributed to decreased precipitation, whereas the sediment decrease in 2008 and 2010 cannot be explained well by precipitation change (Supplementary Tab. S2-3, online). The positive relationship between annual water discharge and precipitation is obvious (Fig. 3), since flow is derived primarily from precipitation. Precipitation affects sediment flux mainly through soil erosion and sediment transport. Using data from the Changjiang Water Resource Committee (CWRC), we find that there is a significant positive correlation between soil erosion and water discharge (correlation coefficient r = 0.83) in the Yangtze Basin over the post-TGD period. In the upper basin, the river's main sediment source 32,33, the precipitation (Fig. 2B1), water discharge (Fig. 2C1), soil erosion and sediment flux (Fig. 2C1) in 2006 and 2011 were all at their two lowest levels of the post-TGD period. After the closure of the TGD, downstream channel erosion became the major source of sediment flux to the sea 34, particularly in 2006 and 2011. In 2006 and 2011, more than 70% of the sediment flux at Datong derived from river erosion. The downstream erosion is positively correlated with precipitation/water discharge. The basin-wide precipitation, water discharge at Datong, and downstream erosion in 2010 and 2012, for instance, were the two largest, whereas the precipitation, discharge and erosion in 2006 and 2011 were the two lowest in the post-TGD decade, based on the sediment budget (Supplementary Tab. S1, online). This positive correlation also reflects the impact of reduced precipitation on sediment flux. Impacts from temperature change.
Temperature change is a global environmental issue. In river basins, temperature controls evapotranspiration, snow/glacier melt, and vegetation cover, and thereby affects water and sediment discharges. Temperatures within the Yangtze Basin have increased significantly since the mid 1980s in spite of the interannual variability (Fig. 2A), consistent with the global land temperature increase of the same period 35. Although it is difficult to quantify this effect in the present study, the covariance between temperature and evapotranspiration is illustrated by the example of 2006, when the basin-wide temperature was the highest (Fig. 2A5) and the water and sediment discharges were extremely low (Fig. 2C5). The discharge in 2006 was unusually far below the trend line between water discharge and precipitation (Fig. 3A), suggesting that important factors other than precipitation strongly affected the discharge. In 2006, the combined effect of reduced precipitation and the TGD can explain only 58% of the water discharge decrease. Water storage in reservoirs excluding the Three Gorges Reservoir (TGR) and water consumption in 2006 were lower than the average over the post-TGD decade (Fig. 2E,F). Assuming that water-soil conservation in 2006 changed the discharge at the same rate as the post-TGD average, more than 10% of the water discharge decrease at Datong in 2006, or more than 2% of the total water discharge decrease in the post-TGD decade relative to the pre-TGD decade, can be attributed to higher temperature (Supplementary Tab. S2, online). In the upper basin of the Yangtze River, the rapid temperature increase over the post-TGD decade (Fig. 2A1) may have accelerated glacial and permafrost thaw. The glaciers in the source area of the Yangtze River totalled 89 km³ before the 1980s 36. Over the past three decades, the area and volume of glaciers have decreased by 18% and 20%, respectively 37. The rate of ice melt was 0.07 km³/yr before 2000 38 and 0.99 km³/yr after 2000 39. Assuming that the glacier melt increased by 0.9 km³/yr from the pre-TGD to post-TGD decades, the increased meltwater would have contributed 9 km³ to the water discharge over 2003-2012. This discharge increase amounted to 7% of the discharge from the source area (8% of the Yangtze drainage basin) over the same decade, and resulted in a 0.7% offset of the water discharge decrease at Datong from the pre-TGD to post-TGD decades. We can therefore conclude that the net impact of basin warming on water discharge, through evapotranspiration and thawing of glaciers and permafrost, has been ca. 1% of the discharge decrease at Datong. Climate warming may have also affected the sediment flux. Higher temperatures increase the rate of rock weathering 40. In the source area of the Yangtze, the temperature in September-April is below 0 °C. Basin warming may have shortened the snowfall season and increased the rainfall disturbance of the surface soil. Thawing of glaciers and permafrost may have enlarged the surface erosion area. However, as shown above, higher temperatures may have increased evapotranspiration and resulted in lower water and sediment discharge. Considering the impacts of these various aspects, ca. 1% of the sediment flux decrease at Datong in the post-TGD decade can be attributed to higher temperature. Impacts from water withdrawal/consumption. Annual water usage/consumption in the Yangtze Basin began being reported in 1997 (Fig. 2F).
Mean annual water usage increased by ~20 km³/yr, and mean annual water consumption by ~3 km³/yr, between the pre-TGD and post-TGD decades, which can explain ca. 2% of the reduced water discharge at Datong (Fig. 4A2). Although an increasing trend in annual water consumption can be qualitatively expected for the period before the pre-TGD decade, considering the rapid increase in population since the 1950s and the rapid increase in economic activity in the Yangtze Basin since the 1980s 20, it is difficult to quantify its impact on water discharge because of the lack of available data. There are two types of impacts of water withdrawal on sediment discharge. The first impact is the product of suspended sediment concentration (SSC) and the amount of diverted water. Considering that 30% of the water diversion has been from reservoirs with very low SSC (data from CWRC) and that suspended sediments in rivers are mainly distributed near the bottom 41, where water diversion is unlikely to be conducted, the increased sediment diversion from the pre- to post-TGD decade was estimated to be less than 2 Mt/yr. Because most of these sediments, had they not been diverted, would have been trapped in reservoirs, increased sediment withdrawal has probably reduced the sediment flux at Datong by less than 0.5 Mt/yr. The second impact of water withdrawal is the reduction of the ability to transport sediment, which is critical to the downstream erosion. The impact of water diversion on the downstream erosion, estimated using the empirical relationships of sediment transport (Equations 16-21), was less than 1 Mt/yr. Subsequently, <1% of the decreased sediment discharge at Datong from the pre- to post-TGD decades can be attributed to water withdrawal. Impacts from water-soil conservation. Land-use change affects vegetation cover and thereby controls sediment yield 42. The rate of sediment yield (RSY) in croplands is much higher than that under natural conditions 43. In the Yangtze Basin, forest cover decreased from 80% about 3000 years BP, to 60% at 1000 years BP, and to 17% in the 1980s 44. The surface erosion in the Yangtze Basin increased from 364 × 10³ km² in the 1950s to 562 × 10³ km² in the 1980s 45. Since then, however, water-soil conservation programs and afforestation have been conducted. The area of water-soil conservation, for instance, increased from ca. 150 × 10³ km² in 1993 to 300 × 10³ km² in 2012; on average, the cumulative area of water-soil conservation increased by more than 50% from the pre- to post-TGD decades (Fig. 2D). As a result, vegetation cover throughout the entire Yangtze Basin has increased by 14% in the recent decade 44. As a result of soil conservation, the area of surface erosion decreased to 520 × 10³ km² in the 1990s 46. Meanwhile, the percentages of very-low-grade and low-grade surface erosion have increased from 38% to 40% and from 34% to 40%, respectively, whereas the percentages of middle-grade, high-grade and very-high-grade surface erosion have decreased from 18% to 15%, from 7% to 3%, and from 3% to 1%, respectively 45,46. Each grade of surface erosion has a characteristic RSY; weighting the erosion areas by these rates gives a pre-TGD RSY of ca. 1.5 Bt/yr. Using the post-TGD correlation between RSY and water discharge, and employing the water discharge data in the pre-TGD decade, the RSY is predicted to be ca. 1.1 Bt/yr. The difference between 1.5 Bt/yr and 1.1 Bt/yr represents the RSY decrease due to soil conservation. In the Yangtze Basin, the ratio of sediment flux to sediment yield is ca.
0.4 47. Thus, soil conservation presumably decreased the sediment discharge by ca. 160 Mt/yr from the pre- to post-TGD decades, before most of the sediment was deposited in cascade reservoirs and lakes. Approximately 80% of the water-soil projects in the Yangtze Basin were conducted upstream of the TGD 42. Based on the sediment budget, the sediment annually trapped in the reservoirs upstream of the TGR is comparable with that in the TGR. Considering the trap efficiency of the TGR (80%), soil conservation in the upstream drainage basin of the Yangtze River would have resulted in a ca. 15 Mt/yr decrease in sediment flux from the TGR towards Datong from the pre- to post-TGD decades. Approximately 20% of the soil conservation projects in the Yangtze Basin were conducted in the middle and lower reaches, particularly in the basin of the Danjiangkou Reservoir in the Hanjiang River. Because ca. 90% of the sediment entering the Danjiangkou Reservoir is trapped 33, as also seen in the Dongting and Poyang Lakes 48, soil conservation there likely has decreased the sediment flux at Datong by ca. 3 Mt/yr. In conclusion, soil conservation in the Yangtze Basin could explain ca. 10% (18 Mt/yr) of the sediment flux decrease at Datong from the pre- to post-TGD decades. Considering the increasing trend in surface erosion from the 1950s to the 1980s and using the regression relationship between sediment yield and surface erosion 43, the mean RSY during the period 1950-1968 was estimated to be ca. 1.6 Bt/yr, or 0.1 Bt/yr higher than in the pre-TGD decade and 0.7 Bt/yr higher than in the post-TGD decade. We can then estimate that soil conservation decreased the sediment flux at Datong by ca. 21 Mt/yr, which can explain ca. 6% of the decreased sediment flux at Datong between 1950-1968 and 2003-2012. Vegetation affects water discharge through transpiration. The afforestation program in the Yangtze Basin has increased vegetation cover and thereby decreased water discharge. Within the Minjiang basin, precipitation increased by 3% but water discharge decreased by 2% from the pre- to post-TGD decades. Based on the close correlation between water discharge and precipitation, and considering the water consumption and water impoundment in reservoirs in this sub-basin, a water discharge decrease of ca. 4 km³/yr can be attributed to water conservation projects. Using the same method, we estimated the influence of afforestation on water discharge for the other sub-basins. For the entire basin at Datong, the total impact of afforestation on the water discharge was a decrease of ca. 37 km³/yr, which explains approximately 29% of the discharge decrease at Datong from the pre- to post-TGD decades (Fig. 4A2). The impact of afforestation on the water discharge decrease between 1950-2002 and 2003-2012 is difficult to quantify using a water budget because data on water usage/consumption before the 1990s are unavailable. In this case, we estimated that water-soil conservation and increased water consumption together explain ca. 16% of the water discharge decrease (Fig. 4A1). We found that the impact of water-soil conservation on sediment discharge had mainly occurred in the western and northern sub-basins, the main sediment source for the Yangtze River, whereas the impact of water-soil conservation on water discharge had been mainly in the southern sub-basins. Impacts from the TGD. The TGD has had two major effects on water discharge.
Firstly, continuing water impoundment in the TGR has increased water storage from its initial 14 km³. It should be noted that seasonal water impoundment/release (more than 20 km³) has had a much greater effect on shorter-term discharge than the impact on annual discharge 29,49. Between 2003 and 2012, 80% (182 Mt/yr) of the sediment from upstream was trapped behind the TGD 34. Although the sediment supply from the basin upstream of the TGR decreased between the pre- and post-TGD decades 22,23, this decrease was relatively minor compared to sediment retention in the TGR 34 (Fig. 2E). The water storage in all other large and mid-sized reservoirs increased by 51 km³ from the pre- to post-TGD periods. Since this number does not include the contribution of the numerous small reservoirs, total water storage has probably increased by ca. 60 km³, or 1.5 times that of the TGR. Assuming that the increased evaporation is proportional to the increased water storage, because of similarity in reservoir bathymetries and evaporation rates, we attribute ca. 9% and 5% of the post-TGD decrease in water discharge at Datong (relative to 1950-2002 and 1993-2002) to water impoundment and increased evaporation in reservoirs other than the TGR (Fig. 4A). Collectively, then, water impoundment and evaporation from all dams along the Yangtze system may account for 8% of the discharge decrease relative to the pre-TGD decade. Between the periods 1950-1968 and 1993-2002, mean sediment flux at Datong decreased by 187 Mt/yr, more than half of the total decrease coming since 1969 (Supplementary Tab. S1, online). This decrease is mainly ascribed to dam construction, because soil conservation, another major cause of the recent decline in sediment flux in the Yangtze River, did not begin until the end of the 1980s 22,23,33,47. From the pre- to post-TGD decades, sediment flux from the Wujiang River (Fig. 1B) decreased by 14 Mt/yr, which was mainly attributable to several large reservoirs (with a total storage capacity of ca. 13 km³). Sediment flux from the Jinshajiang, Minjiang and Jialingjiang Rivers, as gauged at Cuntan Station (Fig. 1B), decreased by nearly 40% between the pre-TGD (330 Mt/yr) and post-TGD decades (190 Mt/yr). Because the main sediment yield areas in these rivers have been the key regions for soil conservation, the decreased sediment flux at Cuntan can be partly attributed to the soil conservation projects. Without detailed data for separating the sediment impacts of the dams and soil conservation, we estimate that approximately half of the sediment decrease from the Jinshajiang River was caused by dams, the other half by soil conservation. A large reservoir (5.8 km³ storage capacity) constructed on the major tributary of the Jinshajiang River, for instance, began operation between 1998 and 2000, after which sediment discharge at the dam site decreased from 27 Mt/yr to ca. 7 Mt/yr. In addition, many large dams were constructed in the middle and lower Jinshajiang River in the latter half of the post-TGD decade, after which as much as or even more than 90% of the Jinshajiang sediment may have been trapped in their reservoirs 34. Because the TGR trapped 80% of the upstream sediment over the 2003-2012 decade 34, these sediments, if not retained behind dams upstream of the TGD, would mostly have been trapped in the TGR. Said another way, these dams have decreased the sediment outflow from the TGR by ca. 17 Mt/yr. Approximately 10% (2 Mt/yr) of this sediment would have flowed into Lake Dongting.
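The chained trap-efficiency reasoning in this passage is a one-liner worth making explicit. The sketch below uses the stated TGR trap efficiency (80%); the upstream-dam trapping figure is a placeholder of ours, since the text reports only the resulting 17 Mt/yr reduction in TGR outflow.

```python
# Sediment trapped annually by new dams upstream of the TGR (Mt/yr).
# Placeholder value chosen for illustration, not taken from the paper.
upstream_trapped = 85.0

TGR_TRAP_EFFICIENCY = 0.80  # stated in the text

# Had the upstream dams not existed, this sediment would have reached the
# TGR, which passes only (1 - efficiency) of it downstream. The reduction
# in sediment outflow from the TGR attributable to upstream dams is then:
outflow_reduction = upstream_trapped * (1 - TGR_TRAP_EFFICIENCY)
print(f"reduction in sediment outflow from the TGR: {outflow_reduction:.0f} Mt/yr")
# -> 17 Mt/yr with this placeholder, matching the figure quoted in the text.
```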
Between the pre- and post-TGD decades, sediment flux from the Hanjiang River was unchanged, and sediment flux into Lakes Dongting and Poyang from their tributaries decreased by 15 Mt/yr, at least partly due to dams. Considering that most sediment entering these lakes would be trapped there 45, dams other than the TGD can explain approximately 10% (18 Mt/yr) of the reduced sediment discharge at Datong over the post-TGD decade. This estimation of a ca. 10% impact from other dams is in agreement with the difference between the total impact (100%) and the impacts from the TGD (65%), precipitation (14%), soil conservation (10%), and water withdrawal etc. (1%). Collectively, ca. 57% (205 Mt/yr) of the total decrease in sediment flux at Datong since 1969 (362 Mt/yr) can be attributed to dams other than the TGD (Fig. 4B). Other factors. Other factors include urbanisation, road construction and earthquakes. Urbanisation and road construction in China have greatly increased in the most recent decade. When buildings and roads are constructed, the soil is exposed to rainfall, and a greater sediment yield can be expected. After the construction, the natural surface is paved with concrete, which decreases water infiltration and increases the runoff coefficient. Over the post-TGD period, several violent earthquakes occurred in the Yangtze Basin, including the magnitude 8.0 Wenchuan Earthquake that resulted in the deaths of 70,000 people. The earthquakes generated mudslides and may have increased local sediment yields. However, in view of the basin scale of the Yangtze River, these factors are limited to small regional scales, and their comprehensive impacts on the annual water and sediment discharges are probably very minor compared with the impacts of the aforementioned factors. Conclusions Over the first decade following the construction of the TGD in 2003, the mean annual water discharge from the Yangtze River to the sea was 7% lower than that during the period 1950-2002 (and 13% lower than that during the period 1993-2002); the mean sediment flux decreased by 71% relative to 1950-1968 (before the decline) and by 55% compared with 1993-2002. However, these declines in water and sediment discharges were attributable not only to the TGD but also to many other natural and anthropogenic factors. In fact, the TGD can explain only 6% (and 3%) of the above water discharge decrease. Approximately 70% (and 60%) of the water discharge decrease was attributable to the precipitation decrease. The post-TGD decade happened to be a dry period and the pre-TGD decade a wet period. Other reservoirs constructed over the post-TGD decade have a combined storage capacity 1.5 times larger than the TGR and were more important in the water discharge decrease. Water conservation and water consumption were comparable with dams (between 1950-2002 and 2003-2012), or even more important than dams (between the pre- and post-TGD decades), in their impact on the water discharge decrease (Fig. 4A). In contrast, the TGD was the dominant cause of the sediment flux decrease. Although thousands of dams other than the TGD can explain 57% of the total decrease in sediment flux since 1969, the TGD alone can explain 31% of this total decrease. In addition, 65% of the pre- to post-TGD decrease in sediment flux can be attributed to the TGD.
In comparison, 6% and 5% of the total sediment flux decrease were ascribed to soil conservation and precipitation decline, respectively, and 14%, 10% and 10% of the pre- to post-TGD decrease in sediment flux were attributable to precipitation decline, other dams and soil conservation, respectively (Fig. 4B). We can therefore conclude that the decline in river water and sediment discharges observed after construction of a large dam can be generated by multiple natural and anthropogenic factors, and a comprehensive evaluation is needed for both scientific understanding and management. Study area. The Yangtze River and the TGD. The Yangtze River originates on the Qinghai-Tibet Plateau at 5100 m above sea level and flows eastward to the East China Sea. The drainage basin is located between 24 °N and 36 °N and is characterized by a subtropical, warm and wet climate (Fig. 1B). The basin-wide precipitation average is ca. 1050 mm/yr, 70%-80% of which is delivered from May to October. About half of the precipitation is lost to evaporation 25,50. The Yangtze Basin is composed of seven sub-basins: the Jinshajiang, Minjiang, Jialingjiang, Hanjiang and Wujiang Rivers, and the tributaries converging at Lakes Dongting and Poyang (Fig. 1B). Precipitation ranges from 730 mm/yr in the Jinshajiang basin (northwestern) to 1560 mm/yr in the Lake Poyang basin (southeastern). In contrast, most sediment is derived from the northern and western sub-basins 25. The TGD was constructed at the outlet of the Three Gorges. Yichang gauging station is 40 km downstream from the TGD and 4500 km downstream from the source waters, and receives water and sediment from a basin area of 1,000,600 km² (Fig. 1). Upstream from Yichang, the basin is mainly mountainous, and the river channel is steep and cuts through deep valleys. Downstream from Yichang the terrain consists of flood plains and low hills, and the river becomes wide with a gentle longitudinal slope 51. About 50% of the water and 80% of the sediment in the Yangtze originate from the basin upstream of Yichang 33. The TGR extends more than 600 km under normal operating conditions, with a surface area of 1100 km². Prior to TGD construction, sediment discharge from upstream was high, and a large amount of sediment was deposited in the middle reaches of the Yangtze. After the closure of the TGD in 2003, significant downstream erosion was observed 34. Methods Datasets. Water and sediment data have been collected at twenty-six gauging stations (Fig. 1B) by the Changjiang Water Resource Committee (CWRC) 52. Suspended sediment discharge is used to represent the total sediment load because the Yangtze River bed load is less than 1% of the suspended load 53. Data for water impoundment in reservoirs, water usage/consumption, water level and cross-channel topographical profiles were also collected by the CWRC. Monthly air temperature and precipitation data recorded by the China Meteorological Administration (CMA) since the 1950s were compiled from 87 gauging stations well distributed within and immediately surrounding the Yangtze Basin 54 (Fig. 1B). The Kriging method 55 was used to interpolate the spatial distribution of temperature and precipitation and to compute basin-wide averages.
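A minimal sketch of this interpolation step, using the open-source pykrige package (our choice here; the paper does not name its software), might look as follows. The station coordinates and values are random placeholders, not the CMA data.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
# Placeholder station data: longitude, latitude, annual precipitation (mm/yr).
lon = rng.uniform(90, 122, 87)
lat = rng.uniform(24, 36, 87)
precip = rng.uniform(730, 1560, 87)

# Fit an ordinary-kriging model with a spherical variogram.
ok = OrdinaryKriging(lon, lat, precip, variogram_model="spherical")

# Interpolate onto a regular grid covering the basin bounding box.
grid_lon = np.linspace(90, 122, 160)
grid_lat = np.linspace(24, 36, 60)
field, variance = ok.execute("grid", grid_lon, grid_lat)

# A true basin-wide average would mask this grid with the basin polygon;
# here we simply average the whole grid as a stand-in.
print(f"grid-average precipitation: {field.mean():.0f} mm/yr")
```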
Data for annual precipitation and sediment discharge missing from the early 1950s (e.g., precipitation in 1950 and sediment discharge at Datong in 1950 and 1952) were reconstructed using regression relationships between precipitation and water discharge and between water and sediment discharges, together with the uninterrupted water discharge data 56,57. Assumptions for differentiating the impacts of major factors on water and sediment discharges. To understand the relative importance of the factors influencing the decline of water and sediment discharges, we need to differentiate and quantify their impacts. This quantification must be based on regression equations. We made the following assumptions for the establishment of the relevant regressions: (1) The dam's impact on downstream water discharge mainly comprises water impoundment and enhanced evaporation in the reservoir; the water impoundment can be calculated using the close regression relationship between water storage and reservoir water level, and the enhanced evaporation can be determined from the increased water surface and the difference in evaporation between the land and the water surface. (2) The dam's impact on downstream sediment discharge mainly comprises sediment trapping in the reservoir and reduction of the ability to transport sediment (due to decreased water discharge). The response of downstream sediment transport to dam operation follows hydrological principles and can be simulated using hydrological regression equations. (3) The interannual variability of water discharge in the Yangtze River is mainly governed by precipitation. For a period in which other factors are stable, there will be a close regression relationship between annual precipitation and water discharge for the entire catchment and the major sub-basins. This regression equation can be employed to predict precipitation-based annual water discharges for the following periods. The difference between predicted and measured values reflects the impact of other factors. (4) The inherent mechanism by which water discharge affects sediment flux was unchanged from the pre-TGD to the post-TGD period, and the pre-TGD hydrological regression is applicable in simulating sediment transport in the middle and lower reaches of the Yangtze River for the post-TGD period. These assumptions simplify the influencing factors, and the regression equations lack hydrodynamic processes because it is challenging to simulate the detailed climate-hydrology-landscape interactions. For example, water discharge is sensitive to changes in rainfall intensity, but this cannot be reflected in a regression between annual precipitation and water discharge. Vegetation also changes with precipitation (not only through human activity), and this natural change in vegetation also affects the water and sediment discharges. Although simplifications and assumptions must be used in this study, our regression approaches are helpful in examining the relative importance of the driving factors. Quantifying the impact of the TGD on water discharge. Water impoundment/release is logically the balance between water inflow and outflow. However, data on water inflow and outflow are often not available; for example, no inflow data are available for the ungauged areas surrounding the TGR 34. Alternatively, we use the relationship between design storage capacity and water level in the TGR to estimate the change in the water storage.
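As an illustration of assumption (1) and this storage-level approach, the sketch below fits a rating curve to (water level, design storage) pairs and differences it to obtain impoundment volumes. The sample points are invented placeholders spanning a 135-175 m operating range; they are not the TGR's actual design table.

```python
import numpy as np

# Placeholder (water level [m], design storage [km^3]) pairs for a reservoir;
# the TGR's actual design table is not reproduced here.
level = np.array([135.0, 145.0, 155.0, 165.0, 175.0])
storage = np.array([14.0, 18.0, 23.0, 30.0, 39.0])

# Fit a quadratic rating curve V(h); storage-level curves are smooth and monotone.
coeffs = np.polyfit(level, storage, deg=2)
V = np.poly1d(coeffs)

# The change in storage between two observed water levels approximates the
# net impoundment (positive) or release (negative) over the interval.
h_start, h_end = 139.0, 172.0
delta_V = V(h_end) - V(h_start)
print(f"estimated impoundment: {delta_V:.1f} km^3")
```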
The design storage capacity of the TGR at water levels between 135 m and 175 m follows a close regression relationship with water level, which we use to convert observed water levels into storage volumes. Quantifying the impact of the TGD on sediment discharge. The methodology of Yang et al. (2014) 34 is used to estimate the impact of the TGD on sediment discharge at Yichang and Datong. The impact of the TGD on downstream sediment discharge is the difference between the sediment discharge in the non-TGD case and that actually measured over the post-TGD period. As defined by Yang et al. (2007a) 58, the non-TGD case assumes that there was no TGD operation since 2003. Because no new sedimentation was observed before 2003 along the channel where the TGR is now situated 53, presumably due to high water velocity 51, it was also assumed that all the sediment deposited in the TGR since 2003 would have been delivered downstream. Thus, estimating sedimentation in the TGR is a prerequisite for predicting downstream sediment discharge in the non-TGD case. We estimated sedimentation in the TGR using a sediment budget approach; i.e., the difference between sediment inflow and outflow. Sediment inflow was computed mostly at gauging stations upstream from the TGR. Sediment derived from the ungauged area between the upstream gauging stations and the downstream gauging station at Yichang was also estimated, using an empirical water discharge-sediment load model 34. The sediment budget also took into account riverbed erosion between the TGD and Yichang 34,58. Based on the estimated sedimentation in the TGR and the measured downstream water and sediment discharge, sediment delivery along the reach between Yichang and Datong (tidal limit) was predicted using empirical correlations (each with a correlation coefficient of R² > 0.99) 58. Quantifying the impact of precipitation on water discharge. Based on data measured in the pre-TGD decade (1993-2002), correlations were established between annual precipitation and water discharge for different basins (Equations 2-6), where Q (Yichang) is the water discharge at Yichang, Q (Hanjiang) is the water discharge of the Hanjiang River, Q (Four rivers to Dongting) is the sum of water discharge from the four tributaries that join Lake Dongting, Q (Five rivers to Poyang) is the water discharge from the five tributaries that flow into Lake Poyang, and Q (Datong) is the water discharge at Datong. P (Yichang), P (Hanjiang), P (Four rivers to Dongting), P (Five rivers to Poyang) and P (Datong) are the corresponding basin-averaged annual precipitation totals. A total of three methods were used to predict post-TGD annual water discharge. In Method 1 (Q1), precipitation data measured during the post-TGD decade (2003-2012) were used in the relationships described above to predict discharge. The difference between the predicted water discharge and the pre-TGD measured water discharge reflects the impact of precipitation change. The difference between the predicted and measured post-TGD water discharge reflects the impact of other driving factors. In Method 2 (Q2), water-balance relationships along the main stem were used (Equations 8-10), where Q (Five channels) represents water discharge from the Yangtze River main stem through five channels into Lake Dongting, Q (Chenglingji) is the water discharge at Chenglingji (the confluence of Lake Dongting with the main stem), and Q (Hankou) is the water discharge at Hankou. Water discharge from ungauged areas is reflected in Equations 8-10.
In Method 3 (Q3), the pre-TGD and post-TGD correlations between precipitation and water discharge were compared in terms of their ability to separate the impact of non-precipitation factors from the impact of precipitation change. The results of these methods are in good agreement, and their average is used to quantify the impact of the driving factors. Quantifying the impact of precipitation on sediment discharge. Correlations were established between annual sediment discharge (QS) and water discharge (Q) measured in the pre-TGD decade (Equations 11-15). A total of three methods were used. In Method 1 (S1), precipitation-based sediment discharge was predicted using Equations 2-6 and 11-15. The difference between the predicted post-TGD sediment discharge and the measured pre-TGD sediment discharge presumably reflects the effect of precipitation change. The difference between the predicted and measured post-TGD sediment discharge reflects the impact of other influencing factors. Method 2 (S2) was applied at Datong station and considers sediment exchange between the water column and the bed, and between the main river and the lakes. In Method 2 (S2), precipitation-based water and sediment discharge for the four sub-basins was predicted using Equations 2-5 and 11-14. Precipitation-based sediment discharge from the main river into Lake Dongting was also predicted following a relationship determined from data measured in the pre-TGD decade. Precipitation-based sediment discharge from Lake Dongting into the Yangtze River at Chenglingji was then predicted from QS (Into Dongting), the total sediment discharge into Dongting, and D (Dongting), the deposition in Dongting; Q (Hankou-Datong) and QS (Hankou-Datong) represent water and sediment inflows into the section between Hankou and Datong based on the water and sediment budget, taking into account the contributions of Lake Poyang and the ungauged area around the main river. In Method 3 (S3), we compared the pre- and post-TGD correlations between sediment discharge and water discharge in terms of their ability to separate the impact of non-precipitation factors from the impact of precipitation-governed changes in water discharge. The accuracy of the above methods depends on the correlation coefficients of the regression equations: a larger correlation coefficient corresponds to a more reliable method. Because the correlation coefficients in this study are generally larger than 0.8, our identified impacts of precipitation and human activities on water and sediment discharges are reasonable. Error estimation for regression-based prediction. The error of a regression equation in prediction theoretically derives from the deviation of the data points from the regression trend line. The lower the correlation coefficient, the higher the error of the prediction. Although all the correlation coefficients in this study are high and the correlations are statistically significant (Equations 2-16 and 19-22), errors in the prediction need to be evaluated. We used the following standard deviation to show the overall error of a regression-based prediction series: S = [(1/N) Σ_{i=1}^{N} (P_i − M_i)²]^{1/2}, where N is the number of data points, i is the order of the data, P is the predicted value using the regression equation, and M is the measured value. In statistics, Equation 22 can be taken as the residual squared error at the 95% confidence level 56. For example, in Supplementary Tab.
S4 (online), we first established regression equations between annual precipitation and water discharge for an odd-year series and an even-year series from 1956 to 2002. We then used these equations to predict the water discharges of the same series. After that, we calculated the difference between the predicted and measured water discharge for each year. Lastly, we calculated the average and standard deviation of the differences for each series. Both of the average differences are zero (Supplementary Tab. S4, online), because of the inherent relation of the predicted values to the measured values. The standard deviation is ±43 km³/yr for the odd-year series and ±57 km³/yr for the even-year series, compared with ca. 900 km³/yr for the multi-year average water discharge (Supplementary Tab. S4, online). To avoid the influence of this inherent relation, we employed the cross-correlation equation of the odd-year series to predict the even-year series, and vice versa. In this case, the averages ± standard deviations of the differences between the two series became −10 ± 57 km³/yr and 10 ± 43 km³/yr, respectively. That is, the absolute value of the mean difference between predicted and measured values is approximately 1% of the annual water discharge, and the standard deviation remains ca. ±5% of the annual water discharge (Supplementary Tab. S4, online). We can therefore conclude that the influence of the inherent relation of the predicted values to the measured values is very low and can thus be neglected. We used S_i to represent the error of the individual prediction P_i (Equation 23), where P_a is the average of the individual predicted values. Supplementary Tab. S5 (online) shows an example of the use of Equations 22 and 23. This method has been employed in multiple studies estimating the error of predicted water and sediment discharges in rivers such as the Yangtze and Pearl Rivers 29,56,57. In this study, we also employed it in estimating the error of regression-based predictions. The relative errors of the predicted water discharges based on precipitation-water discharge regression equations are typically <10%, and the relative errors of the predicted sediment fluxes based on precipitation-water discharge-sediment flux regression equations are <20%. The relative errors of the predicted impacts of the TGD on downstream water and sediment discharges are typically <10% (Supplementary Tab. S2-S5, online).
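The odd/even-year cross-prediction described above is straightforward to reproduce. In the sketch below, synthetic precipitation-discharge data stand in for the 1956-2002 record; the regression is fitted on one parity of years and scored on the other, reporting the mean difference and the standard deviation of Equation 22.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1956, 2003)
# Synthetic stand-ins: precipitation (mm/yr) and a noisy linear discharge response (km^3/yr).
precip = rng.normal(1050, 100, years.size)
discharge = 1.2 * precip - 360 + rng.normal(0, 45, years.size)

def fit(p, q):
    """Least-squares line q = a*p + b."""
    a, b = np.polyfit(p, q, 1)
    return lambda x: a * x + b

odd = years % 2 == 1
even = ~odd

# Cross-prediction: train on odd years, test on even years, and vice versa.
for train, test, label in [(odd, even, "odd->even"), (even, odd, "even->odd")]:
    model = fit(precip[train], discharge[train])
    diff = model(precip[test]) - discharge[test]   # P_i - M_i
    sd = np.sqrt(np.mean(diff ** 2))               # Equation 22
    print(f"{label}: mean diff {diff.mean():+.1f}, sd {sd:.1f} km^3/yr")
```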
2016-05-12T22:15:10.714Z
2015-07-24T00:00:00.000
{ "year": 2015, "sha1": "d232ea09d9cf239b92144acf5a060805aa31c0a0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/srep12581", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "74190af5fcb553603855ce68594ee2a84d697ace", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [ "Medicine", "Environmental Science" ] }
119171390
pes2o/s2orc
v3-fos-license
Self-conjugate core partitions and modular forms A recent paper by Hanusa and Nath states many conjectures in the study of self-conjugate core partitions. We prove all but two of these conjectures asymptotically by number-theoretic means. We also obtain exact formulas for the number of self-conjugate t-core partitions for "small" t via explicit computations with modular forms. For instance, self-conjugate 9-core partitions are related to counting points on elliptic curves over Q with conductor dividing 108, and self-conjugate 6-core partitions are related to the representations of integers congruent to 11 mod 24 by 3X^2 + 32Y^2 + 96Z^2, a form with finitely many (conjecturally five) exceptional integers in this arithmetic progression, by an ineffective result of Duke-Schulze-Pillot. INTRODUCTION Since the time of Young it has been known that partitions index the irreducible representations of the symmetric groups. Young and mathematicians of his time also knew that a partition could be encoded in a convenient way - via what is now known as a Young diagram - and that flipping this diagram about a natural diagonal amounted to tensoring the corresponding irreducible representation with the sign character. Hence it was deduced that the Young diagrams invariant under this flip corresponded to those irreducible representations that split upon restriction to the alternating subgroup. Some time later, it was discovered by Frame-Robinson-Thrall [6] that the hook lengths of a Young diagram determine the dimension of the corresponding irreducible representation (over C). It followed that the study of partitions with hook lengths indivisible by a given integer t - so-called t-core partitions - was connected to modular representation theory. In this paper we study self-conjugate t-core partitions, asymptotically resolving all but two conjectures posed in the paper of Hanusa and Nath [10] on counting self-conjugate t-core partitions. In all but two cases the implied constants are effective, so in principle this reduces many of these conjectures to a finite amount of computation. The ineffective cases are due to the ineffectivity of a result of Duke-Schulze-Pillot [5] on integers represented by forms in a given spinor genus, which arises due to the Landau-Siegel phenomenon. PRELIMINARIES Let λ := λ_1 ≤ ··· ≤ λ_k be a partition of n. For each box b in its associated Young diagram, one defines its hook length h_b by counting the number of boxes directly to its right or below it, including the box itself. The irreducible representations of the symmetric group on n letters, S_n, are in explicit bijection with the partitions of n. The hook-length formula states that the irreducible representation corresponding to λ has dimension n!/∏_b h_b, the product taken over all the boxes in the Young diagram corresponding to λ. The representations of S_n can be defined over Z (i.e., can be realized as maps S_n → GL_d(Z)), and so one may speak of reduction modulo a prime p. From modular representation theory one then obtains the criterion that the reduced representation is again irreducible if and only if the general inequality v_p(n!) ≥ v_p(dim ρ_λ) is an equality, where v_p is the p-adic valuation. That is, the reduction of ρ_λ modulo p is irreducible if and only if none of the h_b are divisible by p. This motivates the following more general definition. Definition 1. A partition λ = λ_1 ≤ ··· ≤ λ_k is called t-core if none of its hook lengths is divisible by t.
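The hook-length formula is easy to sanity-check in code. The sketch below (ours, not from the paper) computes dim ρ_λ = n!/∏_b h_b and verifies the identity ∑_λ (dim ρ_λ)² = n!, which must hold since the partitions λ index all irreducibles of S_n.

```python
from math import factorial, prod

def hook_lengths(parts):
    """Hook lengths of a partition given as a weakly decreasing list of row lengths."""
    cols = [sum(1 for r in parts if r > j) for j in range(parts[0])] if parts else []
    return [r - j + cols[j] - i - 1 for i, r in enumerate(parts) for j in range(r)]

def partitions(n, max_part=None):
    """Yield partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def dim(parts, n):
    """Dimension of the irreducible S_n-representation attached to parts."""
    return factorial(n) // prod(hook_lengths(parts))

# The squared dimensions of the irreducibles of S_n must sum to |S_n| = n!.
for n in range(1, 9):
    assert sum(dim(lam, n) ** 2 for lam in partitions(n)) == factorial(n)
print("sum of squared dimensions equals n! for n <= 8")
```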
The study of t-core partitions goes back at least to Littlewood, who was the first to obtain the generating function for the number of t-cores of n. Recently Granville and Ono [8] have resolved precisely which n admit a t-core partition, and there has also been activity on a related conjecture of Stanton [21] on the monotonicity of t-cores in t, as well as various identities, arising even in Seiberg-Witten theory, involving core partitions. Most relevant to this work is the paper of Hanusa and Nath [10], which concerns self-conjugate t-cores, or partitions that are both t-core and whose Young diagram is symmetric about the natural diagonal (equivalently, those whose corresponding representation ρ_λ does not remain irreducible upon restriction to the alternating subgroup A_n ⊆ S_n). Hanusa and Nath state various conjectures about self-conjugate core partitions, many in direct analogy to conjectures in the study of more general core partitions. In this paper we prove all but two of these conjectures asymptotically. MAIN RESULTS Let sc_t(n) denote the number of self-conjugate t-core partitions of n, and sc(n) denote the number of self-conjugate partitions of n. By A ≪_θ B we will mean |A| ≤ C|B| for some positive constant C possibly depending on θ. By A ≍ B we will mean A ≫ B and A ≪ B, and by A ∼ B we will mean A = B(1 + o(1)). By (a, b) we will mean the greatest common divisor of a and b. For us N := Z_{≥0}. The greatest integer at most x will be denoted ⌊x⌋. Finally, we will also write e(z) := e^{2πiz}. We begin with results on monotonicity. A conjecture of Stanton [21] on monotonicity in t of c_t(n), the number of t-core partitions of n, has a natural analogue for self-conjugate partitions, which we can prove asymptotically, in analogy with Anderson's result [1]. (Note that Theorem 7 proves the corresponding result for t = 4, except with an ineffective implied constant.) Theorem 2 (Cf. Conjectures 1.1, 1.2 of Hanusa-Nath [10].) Let t ≥ 9 or t = 6, 8. Then sc_{t+2}(n) ≥ sc_t(n) for n ≫_t 1, where the implied constant is effectively computable. In fact we will prove a slightly more precise result for "large" t: an exact expansion for sc_t(n) whose main term involves sums over h of 24k-th roots of unity ω̃_{h,k}, defined precisely in the proof. (See (41). In fact, the sum over h is a Gauss sum.) A companion expansion holds in the odd case, with ω̃_{h,k} again a 24k-th root of unity defined precisely in the proof. (Again, the sum over h is a Gauss sum.) The corresponding result for c_t(n) was proved by Anderson [1] using the circle method. Our method is the same for t ≥ 10, except that we need to be much more explicit in order to bound the leading constants (that is, those in front of the (n + (t² − 1)/24)^⋆ terms; these are often called the singular series) away from 0. For smaller t we proceed by explicit computation and knowledge of the growth of Fourier coefficients of modular forms. The case t = 4 will be isolated (see Theorem 7) due to the ineffectivity of the implied constant. As a result, n ≪ sc_8(n) ≪ n log log n. By combining the work of Anderson and Theorem 2, we also obtain the following result. Theorem 5 (Cf. Conjecture 4.1 of Hanusa-Nath [10].) Let 11 ≤ p < q be primes. Then the number of defect-zero p-blocks of A_n is less than the number of defect-zero q-blocks of A_n once n ≫_{p,q} 1, where the implied constant is effectively computable in terms of p and q. Next we move to conjectures about small t-cores. Theorem 6 relates sc_6(n) to the representations of 24n + 35 by the form 3X² + 32Y² + 96Z²; as a result, sc_6(n) > 0 for n ≫ 1, where the implied constant is ineffective.
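These monotonicity statements are easy to probe numerically for small n: enumerate the partitions of n, keep the self-conjugate ones, and discard those with a hook length divisible by t. A brute-force sketch (ours):

```python
def partitions(n, max_part=None):
    """Yield partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(parts):
    return tuple(sum(1 for r in parts if r > j) for j in range(parts[0])) if parts else ()

def hooks(parts):
    cols = conjugate(parts)
    return [r - j + cols[j] - i - 1 for i, r in enumerate(parts) for j in range(r)]

def sc_t(n, t):
    """Number of self-conjugate t-core partitions of n, by direct enumeration."""
    return sum(
        1
        for lam in partitions(n)
        if lam == conjugate(lam) and all(h % t for h in hooks(lam))
    )

# Tabulate sc_6 and sc_8 side by side for small n.
print("n  sc_6  sc_8")
for n in range(1, 16):
    print(f"{n:2d} {sc_t(n, 6):4d} {sc_t(n, 8):4d}")
```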
Ineffectivity in this paper is due to the Landau-Siegel phenomenon, whereby the implied constant in Siegel's bound h(−D) ≫_ǫ D^{1/2−ǫ} is ineffective. Theorem 7 gives the corresponding monotonicity statement for t = 4; as a result, sc_6(n) > sc_4(n) for n ≫ 1, where the implied constant is ineffective. Theorems 6 and 7 follow from a computation of the genus and spinor genus of the quadratic form 3X² + 32Y² + 96Z² (using Magma) and results of Duke-Schulze-Pillot [5] (which rely on the subconvexity bound of Iwaniec [12] for squarefree coefficients of cusp forms of half-integral weight). Monotonicity is violated for t = 7, however. The proof is essentially one line: the integers for which sc_7(n) = 0 are known (those for which n + 2 = 4^k · (8m + 1)), and, similarly, those for which sc_9(n) = 0 are known (those for which 3n + 10 = 4^k). These two sets are infinite and have only n = 2 in common. The result follows. By computations with Sage, Magma, and Mathematica, we in fact obtain more precise formulas for sc_7(n) and sc_9(n), stated separately in the cases: n odd; n ≡ 0 (mod 4); and n ≡ 2 (mod 4), writing 3n + 10 = 2^e · m with m odd. Here the a_n(E) appearing in these formulas are the coefficients of the Dirichlet series for the L-function of the elliptic curve E. The curve 36a is y² = x³ + 1, the curve 108a is y² = x³ + 4, and the curve 54a is the corresponding curve of conductor 54. Theorem 9 explains the prevalence of integers congruent to 82 mod 128 appearing in the numerics of Hanusa-Nath [10]: 3 · 82 + 10 = 256. In fact, looking more closely, the integers n for which sc_9(n) < sc_7(n) that they found all satisfy 3n + 10 = 2^e · m with m small and e large. In any case, by the Hasse bound we obtain, for n ≡ 2 (mod 4), an explicit upper bound on sc_9(n). From this estimate and an elementary construction we see that no inequality of the form sc_9(n) ≫ sc_9(⌊n/4⌋) could possibly hold. Finally, we prove that the proportion of self-conjugate t-cores among self-conjugate partitions tends to 1 if t grows linearly with n, in analogy with a result of Craven [4] to the same effect for t-cores proper. PROOFS All of the arguments begin from the determination of the generating function for sc_t(n), obtained by Olsson [17] and Garvan, Kim, and Stanton [7]. Write q := e(z), and write η(z) := q^{1/24} ∏_{n≥1} (1 − q^n) for the Dedekind eta function. Theorem 13 expresses the generating function ∑_n sc_t(n) q^n as an eta product, with one formula for t even and one for t odd. Hence we see the generating functions are essentially eta products, of weights t/4 and (t − 1)/4 in the cases of t even and t odd, respectively. These are holomorphic at all cusps, as the following general theorem about eta products (see [14]) shows. That the products are holomorphic inside the upper half-plane is immediate from the infinite product representations. In this case holomorphy at the cusps amounts to one inequality for t even and one for t odd, which both hold by inspection (the expressions are smallest when c and t share no odd prime factor; one then splits into cases based on c modulo 4). We also need explicit formulas for the multiplier systems of the eta and theta functions (with θ(z) := ∑_{n∈Z} q^{n²}), which can be found in Knopp [13] (except for a missing factor of 2 in the formula for v_θ), and are originally due to Petersson [18]. Before stating the formulas, we fix the standard extensions of the Jacobi symbol (c/d) to all relevant pairs (one convention for d odd and one for c odd). The transformation laws then read as usual, with the formula for v_θ valid when γ ∈ Γ_0(4) (that is to say, c ≡ 0 mod 4), where we take the principal branch of the square root. More specifically, there are explicit formulas for v_η (one for c even and one for c odd) and for v_θ. With this established, we may begin the arguments.
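Since the multiplier systems above are built from Jacobi symbols, a small self-contained implementation is handy for numerical experiments; this sketch is ours, using the standard quadratic-reciprocity algorithm.

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:          # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):    # (2/n) = -1 iff n = +-3 mod 8
                result = -result
        a, n = n, a                # reciprocity: flip sign iff both = 3 mod 4
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

# Sanity check: (a/p) agrees with Euler's criterion for odd primes p.
for p in (3, 5, 7, 11, 13):
    for a in range(1, p):
        assert jacobi(a, p) == (1 if pow(a, (p - 1) // 2, p) == 1 else -1)
print("jacobi symbol verified against Euler's criterion")
```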
Proof of Theorems 2, 3, and 4. 4.1.1. Small t. First, we handle the cases of small t: the circle method will only tell us something for t ≥ 10. We will see that sc_6(n) is proportional to the number of representations of 24n + 35 by the form 3X² + 32Y² + 96Z². By Siegel's mass formula this is, to leading order, proportional to a class number, which is bounded above by ≪ n^{1/2} log n; hence sc_6(n) ≪ n^{1/2} log n. Since the generating function for sc_8 factors, with one factor a shift of the generating function for the triangular numbers, sc_8(n) counts (up to signs) the representations of 8n + 21 by the form X² + Y² + 2Z² + 8W². Now 8n + 21 ≡ 5 (mod 8), so that if 8n + 21 = x² + y² + 2z² + 8w², without loss of generality we may take x odd and y even. By considering this equality modulo 8, we see that 4 does not divide y, and hence 2 divides z. Thus the representations of 8n + 21 by the form X² + Y² + 2Z² + 8W² are equinumerous (modulo switching X and Y) with the representations by X² + 4Y² + 8Z² + 8W². The former is a universal form (as may be easily checked by the Fifteen Theorem [3], or looked up in Ramanujan's table of universal diagonal forms [19]), and the number of representations of an integer N lies between ≫ N and ≪ N log log N. Since sc_8(n) is then (up to flipping signs) the number of representations of 8n + 21 by a universal form, we obtain the bounds n ≪ sc_8(n) ≪ n log log n. (Alternatively, by a result of Shimura [20] the theta function of the form is a modular form of weight 2 and level 8 with trivial nebentypus, and there are no cusp forms in M_2(Γ_0(8)).) This proves the cases t = 6 and t = 8. We also have a corresponding upper bound for sc_7(n); finally, we will see in Theorem 10 that the same upper bound holds for sc_9(n). 4.1.2. The circle method. Now we will apply the circle method. Let P(q) := ∏_{n≥1} (1 − q^n)^{−1}, the generating function for the partition function p(n). The crux of our calculation is the use of two transformation formulas for P. The first is obtained using a transformation formula for the eta function involving a Dedekind sum (see e.g. Apostol [2]), and the second is obtained using a transformation formula for the eta function involving Jacobi symbols (see e.g. Knopp [13]). Let z ∈ C be such that Re z > 0. Then P near the root of unity e(h/k) can be expressed in terms of P near 0, up to an explicit factor involving s(h, k) := ∑_{μ mod k} ((μ/k))((hμ/k)), a Dedekind sum. Equivalently, writing ω_{h,k} := e^{πi s(h,k)}, a 24k-th root of unity, the singular behaviour of P at e(h/k) is under explicit control. From this calculation we obtain transformation formulas for the F_t. For even t, one chooses h^{(1)}, h^{(2)}, h^{(3)}, h^{(4)} ∈ Z satisfying suitable congruences modulo k, and similarly for odd t. The point of such a formula is to move the argument of P from near the unit circle to near zero (that is, to |z| small), where P(0) = 1 gives us total control over the singularities at the roots of unity. The rest of the calculation follows Anderson rather closely. Let N ∈ Z_+ and 0 < R < 1. We take N ≍ √n and R = e^{−2π/n}. Of course, by Cauchy's formula, sc_t(n) is the integral of F_t(q) q^{−n−1} over the boundary circle, where Δ_R ⊆ C is the disk of radius R. Now for consecutive Farey fractions of order N (so that k_1, k, k_2 ≤ N), write the corresponding Farey arcs, with endpoints the mediants; these dissect the circle. Writing R =: e^{−2πǫ} and z := ǫ − iθ (we will take ǫ = 1/n), we see that the integral becomes a sum of integrals over Farey arcs. We first do the case of even t. In this case, by the transformation formula, the integral splits into three pieces, which we call M, E_1 and E_2. As suggested by the naming, M will be the main term, and E_1 and E_2 will be error terms, at least for t ≥ 10. Let us first calculate M. Note that, on choosing the principal branch of the logarithm on C − R_− (the complex plane without the nonpositive reals), the integral defining M can be deformed to a loop coming in from −∞, circling the origin, and returning, with the caveat that the contour does not intersect the nonpositive reals. This path is often called Hankel's contour, since such an integral calculates the gamma function by Hankel's formula.
Namely, this becomes … Hence it suffices to bound these two integrals and the E_i. The integrals pose no problem. Namely, bounding trivially (i.e., via the triangle inequality), … (Here A ∝ B means A = cB for some constant c; that is, A is proportional to B.) Since … by definition, we have the estimate (using N ≍ √n and ε^{−1} = n) … The same holds for the other integral (by Schwarz reflection or repeated effort). Thus we see that … where we have used the trivial (and suboptimal) bound … Next, since all the series converge absolutely in the disk, we have the general estimate … obtained by expanding out the relevant series in q. Thus for example … Now we turn to E₂. We only need that the above bound is ≪_t 1. Namely, again bounding trivially, … Now, if (t, k) = 1, then (remember t is even!) … Also, … Hence … Note that (πε / (12k²(ε² + y²))) e^{−πε/(48k²(ε² + y²))} ≪ 1, since the map x ↦ xe^{−x} is uniformly bounded on R⁺. Also, the length of the integral is … Hence … Finally, we turn to bounding E₁. Again bounding trivially (using our "general bound") as before, … Observing that the difference between the sum with k ≤ N and the sum in the theorem statement is …, we obtain the first claimed equality for even t ≥ 10. To show the asymptotic claim, write … Observe that … Thus the asymptotic claim follows, and so we have the full theorem for even t ≥ 10. We will have to do quite a bit more work in the odd case for t = 11, but t ≥ 13 will follow similarly.

4.1.3. The circle method: odd t. Things are more complicated in bounding the corresponding C_t(n) for odd t, essentially because our eta products do not vanish when 4 | k and (k, t) = 1, so there are more terms in the defining sum. But the circle method argument is entirely the same. The only input is the fact that … if (t, k) = 1 or k ≡ 2 mod 4, and that it is zero otherwise. Thus, following the exact same argument as above, we obtain … Write, again, … For t ≥ 13, since φ(k) ≤ k/2 for even k, we have that … Unfortunately a similar argument does not work for t = 11. So instead we present in the next subsection a calculation that gives …, completing the proof.

4.1.4. Controlling the singular series C₁₁(n). Here t will be odd, and soon we will take t = 11 explicitly. We will realize the sums over h as Gauss sums. To do this, we will need Petersson's more explicit transformation formula for the eta function, mentioned above (see Theorem 15). For odd k, let h^{(6)} ∈ Z be such that 4th h^{(6)} ≡ −1 (mod k), and write … so that … Then Petersson's formula tells us that (after much cancellation; implicitly we use that (t, 6) = 1, so that t² ≡ 1 mod 24, which of course holds in our case) … where the term (−2th / k) is the usual Jacobi symbol. Hence the sum over odd k in C_t(n) is, for t = 11, a Dirichlet series of Gauss sums. Similarly, in the case 4 | k (and (t, k) = 1), let h^{(3)} ∈ Z be such that th h^{(3)} ≡ −1 (mod k). Write … so that … Then, applying our transformation formulas with these h^{(i)}, after a great deal of cancellation we see that, for t = 11 and 8 | k, … Thus the sum over even k in C₁₁(n) can be written (splitting into a sum over odd k and e ≥ 2 via replacing k by 2^e k) … We can evaluate these Gauss sums (or, perhaps more correctly, "twisted Ramanujan sums") exactly (see Montgomery-Vaughan [16], Theorem 9.12).

Theorem 18. Let χ be a Dirichlet character modulo q of conductor d | q, and let χ* be the corresponding primitive character inducing χ. Then: … where τ(χ*) is the Gauss sum corresponding to χ*, of absolute value √d if χ* is nonprincipal, and μ is the usual Möbius function.
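As a numeric illustration of the absolute-value statement in Theorem 18, the following short Python sketch (our own check, using only the definitions) verifies |τ(χ)| = √p for the Legendre symbol modulo small odd primes, which is a primitive quadratic character:

```python
import cmath
import math

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    s = pow(a % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def gauss_sum(p):
    """tau(chi) = sum over a mod p of chi(a) e(a/p), chi the Legendre symbol."""
    return sum(legendre(a, p) * cmath.exp(2j * cmath.pi * a / p)
               for a in range(1, p))

for p in (3, 5, 7, 11, 13):
    assert abs(abs(gauss_sum(p)) - math.sqrt(p)) < 1e-9
```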
So, for odd k, since the Jacobi symbol (·/k) is primitive modulo the squarefree part of k (which we will denote k/□, where □ is the largest square dividing k), we have that … Next we turn to the even Gauss sums. Since the symbol (8k/·) is primitive modulo 8k (again k is odd), we see that, writing gcd := (n + 5, k), … where the first equality follows from considering h ↦ h + 4k in Z/8kZ: the summand picks up a minus sign from each term, and so does not change. Similarly, for e > 2, since (2^{e+1}k/·) is primitive modulo k · 2^r, where

2^r := 1 if e is odd and k ≡ 1 (mod 4),
2^r := 4 if e is odd and k ≡ 3 (mod 4),
2^r := 8 if e is even,

we see that, writing gcd := (n + 5 + 2^{e−3}k, 2^e k), … As horrible and unwieldy as these formulas may look, the essential observation is that their absolute values are (almost) multiplicative in k (that is, the absolute value of the term corresponding to kℓ is the product of those corresponding to k and ℓ if (k, ℓ) = 1). Namely, for k odd, writing k =: ∏_{v_p(k) odd} p · ∏_p p^{e_p} with each e_p ∈ 2Z and v_p(·) the p-adic valuation (so that the second factor is precisely what we have been calling the largest square dividing k) and gcd := (n + 5, k), we have the following formulas. First, … This is multiplicative in k. Next, if e > 2 and e is odd (gcd = (n + 5, k) still), then … This is not multiplicative in k, but, by weakening the vanishing conditions a bit and factoring out the terms depending only on e, we can bound it above by something that is. Namely, … If e > 2 is even, … This is multiplicative in k once we factor out the terms depending only on e. Finally, for e = 2, … Note that the right-hand side is the same as the result of setting e = 2 in the formula for e > 2 with e even. In particular this is also multiplicative in k once we factor out the terms depending only on e. The formulas may look horrendous, but we are about to apply them for prime powers only (thanks to multiplicativity), where they become rather simple. For instance, the sum over odd k (so e = 0) in |C_t(n) − 1| is at most … This ends up simplifying to … The same holds for the other sums, too. That is, the sum over e > 0 is bounded above by … Σ_{e=2, e even}^{v_p(n+5)+2} … where by the condition "⋆" we mean … That is, it is bounded above by …

4.3. Proof of Theorem 6. The generating function for 6-cores is Σ_{n≥0} … The second factor is the generating function … where c_3(n) denotes the number of 3-cores of n. By an identity of Jacobi (see e.g. [9]) the coefficients are known: … Note that the right-hand side is a multiplicative function of 3n + 1. On prime powers p^k with p ≡ 1 (mod 3) it takes the value k + 1, and on prime powers p^k with p ≡ 2 (mod 3) it takes the value 0 or 1 according to whether k is odd or even, respectively. By classical algebraic number theory, this is exactly half the number of representations of 3n + 1 by the form X² + 3Y². That is, … or

Σ_{n≥0} sc_6(n) q^{24n+35} = (1/2) Σ_{n≥0} #{n = 3(2a + 1)² + 32b² + 96c², a ≥ 0} · q^n.

Hence we obtain the claimed formula. A computation in Magma shows that the spinor genus of the form 3X² + 32Y² + 96Z² coincides with its genus. By a theorem of Duke-Schulze-Pillot [5] this gives the ineffective claim about the positivity of sc_6(n), since the integers 24n + 35 are locally represented by this form.

Proof of Theorem 7. We have already seen that (ineffectively) … The generating function for 4-cores is … Of course if 8n + 5 = a² + b², without loss of generality a is odd and b ≡ 2 (mod 4), so we see that … Writing 8n + 5 =: ∏_p p^{e_p}, we know that the right-hand side is precisely ∏_{p ≡ 1 (mod 4)} (e_p + 1), or 0 if there is a p ≡ 3 (mod 4) with e_p odd.
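As a numeric sanity check on the Theorem 6 identity displayed above, one can compare 2·sc_6(n), computed with the brute-force sc_t from the earlier sketch, against a direct count of representations of 24n + 35 by 3(2a + 1)² + 32b² + 96c². This is our own check, not the paper's Magma computation; the helper name reps_theorem6 is ours.

```python
import math

def reps_theorem6(n):
    """#{(a, b, c) : 24n + 35 = 3(2a+1)^2 + 32 b^2 + 96 c^2, a >= 0, b, c in Z}."""
    N = 24 * n + 35
    count, a = 0, 0
    while 3 * (2 * a + 1) ** 2 <= N:
        rem = N - 3 * (2 * a + 1) ** 2
        bmax = math.isqrt(rem // 32)
        for b in range(-bmax, bmax + 1):
            rem2 = rem - 32 * b * b
            if rem2 % 96 == 0:
                c = math.isqrt(rem2 // 96)
                if 96 * c * c == rem2:
                    count += 2 if c > 0 else 1  # c and -c both count
        a += 1
    return count

# Theorem 6 as stated: sc_6(n) is half the representation count.
for n in range(10):
    assert 2 * sc_t(n, 6) == reps_theorem6(n)
```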
Proof of Theorem 9. Let … Then, for γ =: (a, b; c, d) ∈ Γ₀(28), a calculation with the multiplier systems for the eta and theta functions shows that … Hence G_t is a modular form of weight 3/2, level 28, and nebentypus character χ₇. A paper of Lehman [15] lists the ternary quadratic forms of level 28 and discriminant 7; by a theorem of Shimura [20] these forms have associated theta functions of weight 3/2, level 28, and nebentypus χ₇ as well. The forms are … and … A computation in Sage shows that the theta functions associated to these quadratic forms form a basis for the four-dimensional space of modular forms of weight 3/2, level 28, and nebentypus χ₇. Using Sage to express G_t in terms of this basis gives the claimed formula.

Remark 19. In fact a finite computation in Sage does amount to a proof of the equality for all n, since we can easily check that G_t and the sum above have q-expansions agreeing well past the Sturm bound, which is smaller than 100 in all cases. Hence, since both sides are modular, they must agree for all n (the point is that the space is finite-dimensional). Note that the same remark applies for the following subsection as well.

Now, writing N_X =: 2^a · 5² · 7² · N'_X (so that a = 0 or 1), … Also, by the prime number theorem (recall 3n_X + 10 = N_X), log n_X ∼ log N_X ∼ X, so that this lower bound is σ(N_X) ≫ N_X log log n_X.

4.9. Proof of Theorem 12. The following argument is in exact analogy with that of Craven [4] for c_t(n).
The Influence of Cultural Learning on Second Language Learning

Culture is important for language learning, but culture in language education has not been given much attention in China. The purpose of this paper is to discuss the inseparable relationship between culture and language, and to explore how cultural education plays a role in China's language teaching system. The data collected through questionnaires and interviews in this paper suggest that Chinese university students still place a relatively high value on culture and recognize the importance of cultural education for language learning.

INTRODUCTION

As a result of globalization, the importance of understanding culture in language learning is becoming apparent, and this concerns both the teachers who teach languages and the students who learn them. In China, students are used to blindly following the teacher's mechanical routine of grammar, reading and listening training day in and day out. The teachers' aims are aligned with the students' demands, and language learning is ultimately about passing exams and going on to higher education. When a language is taught for its educational value, it is important to understand the cultural content associated with that language [1]. Therefore, in the context of this worldwide trend towards cross-cultural communication and of exam-oriented education in China, we believe that it is essential to integrate more cultural knowledge into the second language classroom. This paper will discuss the inseparable relationship between culture and language and explore how cultural education can play a role in China's language teaching system. There has been much debate about the integration of culture into the school curriculum; in this paper we review the existing literature in order to build on it and conduct further research. This paper will examine the relationship between language and culture by collecting data from foreign language university students through interviews and questionnaires and then analyzing the results using several models and theories. Data will be collected and the literature reviewed in order to identify the influence of culture in second language acquisition and, on this basis, to provide some suggestions for learning and teaching.

Research on cultural teaching in western second language teaching

From the perspective of the development of western language teaching, cultural teaching is closely related to language teaching. As early as the Middle Ages, European language teaching included universal culture teaching, introducing students to the history and geography of the Roman Empire. Since the 19th century, many linguists have realized the influence of cultural factors on language acquisition and the close relationship between language and culture. After investigating the relationship between language and national culture, customs and beliefs, the American linguist Sapir concluded that "language has a base... Language does not exist without culture, that is to say, it does not pass down from society..." [2]. The German linguist Humboldt (Humboldt, B.V.W.) believed that a nation's language and thought are inseparable: each nation inevitably puts some unique cultural awareness into its own language and forms in that language a special kind of "world view", which in turn constrains people's speech [3].
With the outbreak of the Second World War, people paid more attention to the cultural differences between countries and regions, and studies in sociolinguistics, pragmatics and cross-cultural communication theory developed rapidly. Many scholars also proposed that foreign language study should no longer be an independent discipline, but should be combined with the political science, history, geography and literature of a particular region to form an interdisciplinary group [4]. As seen from the standards for Foreign Language Teaching in the 21st Century issued by the US Department of Education in 1996, cultural teaching covers all the standards in the syllabus and has become a central task of language teaching [5]. Judging from the current state of research, The Concept of Cultural Teaching Practice by the American scholar Patrick R.M. is a relatively authoritative book that systematically discusses language and culture teaching and makes a comprehensive and profound summary and study of the basic theories [6]. In addition, many scholars have proposed multi-faceted and specific approaches to teaching language and culture, which have gradually been systematised.

Cultural studies in second language teaching in China

Cultural teaching in China started late, and a large part of it consists of directly absorbing western research results. In the actual teaching process, teachers are relatively cautious and unsystematic, and there is a lack of corresponding teaching materials covering intercultural communication. Chinese attention to language and culture teaching originated in the 1980s; Mr. Xu Guozhang's speech on the Cultural Connotation of Words and English Teaching [7] marks the point at which Chinese scholars began to attach importance to cultural factors in foreign language teaching (Hu Wenzhong). Prior to this, China had mainly translated a large number of foreign classics and studied foreign works, such as Hall's The Silent Language [8] and Intercultural Communication and Learning [9] (Hu Wenzhong). Chinese scholars have concentrated on integrating a great deal of theoretical and practical knowledge of cultural factors, producing a series of specific approaches to teaching foreign language and culture suitable for China, such as Chen Shen's Language and Culture Teaching Strategies [10] and Hu Wenzhong and Gao Yihong's Foreign Language Teaching and Culture [11]. On the whole, however, the theory of foreign language culture teaching in China still needs to be enriched, and the professional practice of its teaching methods needs to be improved.

METHOD

Research has provided adequate evidence for us to analyze a variety of functions of culture during the process of second language acquisition. Considering the pertinence of the survey, students who major in a foreign language are the best choice, as they share a common goal: finding a job related to their second language. Linguistically, error analysis and the acculturation model will help us interpret our data.

Participants

The key research question of our study is how cultural factors influence second language acquisition in classes. To analyze this question, the participants consist of undergraduate and graduate students, to whom different versions of the questionnaire and interview were presented. Among them, some undergraduate students are participating in 2+2 joint tertiary study programs, while the graduate students have experience of studying abroad or the goal of working abroad or in a multinational corporation.
Acculturation

Acculturation refers to changes that arise from sustained first-hand contact between individuals of differing cultural origins [12]. Using the acculturation model to frame the study can better explain what the respondents said. What undergraduate students mention most is the exam-oriented education they received before college, which resulted in little exposure to cultural learning. When they start to have more and more classes related to culture, culture shock occurs, especially for those who join the 2+2 study programs. Students who have studied abroad explain that they can maintain contact with people of the target language society and gain more linguistic as well as cultural knowledge. In this process, they realize that some of their behaviors gradually change, which means that their original culture is being changed by the influence of other cultures. From the data, all participants mention that acculturation is an indispensable step for future employment and cross-cultural communication.

Procedure

We sent questionnaires to undergraduate and graduate students majoring in foreign languages, and selected particular participants for an interview. Participants are required to express their views on the ideas mentioned in each item by choosing a point on a scale reflecting their level of agreement. They can also write down their own opinions in the short-answer questions. Privacy is assured, and respondents can update their ideas after the research. All the answers are then collected and analyzed by the research team.

QUESTIONNAIRES

Adapted from WeChat English culture learning and its impact on language learning, the ten-item questionnaire utilized in this study aims to assess three themes: (1) target language learning includes cultural learning (Items 1, 2), (2) the importance of cultural learning in language learning (Items 3, 4, 5, 6, 7) and (3) the position of cultural learning in language learning (Item 8). The first theme of the questionnaire aims to investigate whether the teaching of the target language includes cultural teaching. The second theme is concerned with the attitude change of the participants towards the target culture. The last theme assesses the status of target language cultural learning in China.

Theme 1: Target language learning includes cultural learning

Textbooks and courses are becoming increasingly abundant and diverse, and, as the table below shows, the participants reported that cultural content is common in foreign language learning. Not only do almost all the textbooks contain cultural content, but the types are also very rich. Among them, living habits account for the largest proportion, as high as 86.89%, and most other types of cultural content also reach or exceed 50%.

Theme 2: The importance of cultural learning in language learning

We are particularly concerned with this topic, so we designed many related questions, and the results obtained are in line with our expectations. In the minds of the interviewees, cultural learning is still very important to their target language. More than 95% of the participants clearly state that cultural learning is helpful to language learning, and everyone who participated in this questionnaire believes that foreign language and cultural learning is necessary: it is the only item in our questionnaire for which the "No" option received zero votes. Compared with the previous ones, the last topic of our questionnaire is relatively simple, consisting of one subjective question. However, this is enough to reflect one fact: most of the language-culture content on offer is still insufficient.
Do you think that in the context of Chinese language education, cultural learning is necessary for the study of its language, and why? [Short answer]

The last item in this questionnaire is the only open one, asking participants whether they find it necessary to strengthen the role of cultural learning in their language learning. The answers can be divided into six points. Next, we deal with these points and discuss them together with some of the participants' answers.

Point 1: Language is a part of culture, and language cannot exist independently without culture.

"I think it is necessary. Language is a part of a nation's culture and a way of disseminating culture."

After the introduction of the Direct Method into English language teaching, cultural elements began to be regarded as an important aspect of language learning. Nowadays, cultural background knowledge is accepted as a necessary part of language teaching. As Edward Sapir points out in Language, there is something behind the language, and language cannot exist without culture; the so-called culture is the summation of inherited habits and beliefs, which determines the organization of our lives. L. P. Palmer states that the history of language and the language of history are complementary: they can assist and enlighten each other. Language does not exist in a vacuum, so language learners should understand the background; they should also learn about the target culture. In this respect, Crystal supports this statement well: "Language has no independent existence: it exists only in the brains and mouths and ears and hands and eyes of its users" [13]. Some participants in this questionnaire are clearly aware of the importance of culture and believe that culture and language are inseparable.

Point 2: Cultural learning contributes to language learning

"Knowing the cultural background can make people understand the language better."

The main problem in teaching culture in the foreign language classroom lies in the uncertainty of the meaning of culture itself (Furstenberg, 2010). Although there have been teaching activities devoted to culture teaching, and textbook materials have included cultural dimensions, there is still a constant need to redefine the concept of culture in a way that is meaningful in the language classroom, which is the first place where students encounter another language [14]. Therefore, whether in China or in other countries, cultural teaching has always been full of uncertainty. The process of language learning is, moreover, a relatively abstract one. Many interviewees said that cultural education serves as a kind of background introduction for them. This is a process of concretizing originally abstract concepts, gradually forming a picture in the learner's mind. Take the word "renaissance": we all know that it means revival or regeneration, but if you know something about the Renaissance, you will immediately realize that the word can also be used as a proper noun referring to that period. When the teacher then introduces content about the Renaissance, you can quickly accept this concept and visualize it in your mind. We should also mention that although course books provide real-life situations, learners lacking insight into the target culture have difficulty associating these situations with real people [15].
Participants say that cultural learning is an extremely efficient way of learning, so they regard it as a tool and method for learning the target language, allowing them to master it better.

Point 3: Mastering the grammatical framework does not mean mastering the language.

"The important thing is not the grammatical structure but the change in the way of thinking."

When discussing language proficiency, linguists often distinguish between linguistic competence and linguistic performance. Chomsky defines linguistic competence as what one knows about the language, while linguistic performance is one's actual language use [15]. Many of our participants say that, besides being able to take exams, the biggest purpose of their language learning is communication. The ability to communicate matters in two directions. One is the language itself, that is, its structure, grammar and so on; the other is the way of expression, and cultural competence falls into the category of expression. This is like the learner thinking from the cultural perspective of the target language and expressing himself according to the native speaker's mode of thinking. This mode of thinking is influenced by many aspects, such as religion, customs and social structure, all of which are covered by culture. Lado argues that a lack of cultural competence in the target language will surely lead to transfer from the native language to the target language [5]. This is why there is such a thing as "Chinglish", which imposes the Chinese model on English: this kind of English can be understood by Chinese people as soon as they hear it, but native English speakers generally have to guess. Therefore, even when the grammatical structure is correct, locals may not understand what is meant. The participants here are clear that more exposure to cultural learning can enhance their communication skills.

Point 4: Cultural teaching does not fit the Chinese education system

"I do not think it is necessary; it may be more important for students who need to go abroad."

As exchanges between countries around the world become closer and closer, many students choose to study abroad in order to adapt to the development of globalization, but obviously a larger share of students do not consider this path. Therefore, for them, even if they do not deny the importance of culture to language learning, they still think it is unnecessary. As far as the current situation is concerned, China's education model is still relatively mechanical, as is its language teaching. Teachers are accustomed to strengthening students' grammar systems: they teach grammatical rules over and over again, and students drill them over and over again. Of course, both the content of the textbook and the teacher's words will involve some cultural knowledge, but we are more accustomed to using it as a supplement, an additional content detached from the rigid courses. So, in the end, we return to the constraints of the Chinese education system. It is precisely because of this restriction that some participants voiced opposition.

CONCLUSION

Cultural learning deepens the understanding of the language, which is very helpful to language learning. Even if the cultural content involved in our textbooks and teaching is not sufficient, many students in fact regard cultural learning as an important part of language learning.
They also use their own methods to absorb cultural knowledge. The results of our questionnaire have already shown that people still value cultural learning. Cultural learning and language learning are inseparable and interact with each other. Cultural learning is helpful to language learning and can strengthen expressive ability as well; if the aim is merely to pass exams, however, it is another matter. Therefore, in the current situation, there is no doubt that cultural learning is necessary. But whether this holds under China's education system remains, as the results of this article suggest, an open question.
The technique of automated applying of polymer coatings used for repair of tractor parts

Restoration of case parts of machines significantly reduces the cost of their repair. The present paper proposes a method for restoring case parts using the repair size method. Depending on the degree of wear, the fitment holes are bored to one of three repair sizes. A new bearing is inserted into the prepared hole, with the outer ring covered by a polymer coating of the appropriate thickness ensuring the fixity of the joint. As a result of theoretical studies, a model was obtained for the formation of a uniform polymer coating on the outer surface of a rotating cylindrical part. A facility and technological equipment were developed for applying a polymer coating of F-40C elastomer solution onto the outer surface of a rotating bearing. The facility includes a screw-cutting lathe 1K62, a centering drift pin, a bath and a magnetic holder MV-V. The holder is designed for mounting the bath filled with polymer solution and for its plunging movement when rolling bearings are immersed in the polymer solution. The drift pin serves for centering and assembling the bearings. The assembled drift pin is inserted into the chuck of the machine, which rotates the pin while the polymer coating is applied. The lathe 1K62 is equipped with an INNOVERT frequency converter of H3400A05D5K type, which allows stepless adjustment of the spindle speed from 0 min⁻¹. The paper presents the results of experimental studies on F-40C elastomer shrinkage, the parameters of the mode of immersing parts into its solution, the dependence of the geometric parameters of the formed polymer coating on the components of the application mode, and the adhesion of polymer coatings based on F-40C elastomer solutions of different viscosity. The acceptable thickness of the F-40C elastomer polymer coating on bearing 209, ensuring non-failure operation of the restored seating under a cyclic radial load of 20 kN, is 0.1 mm.

Introduction

Case parts are basic durability components determining the longevity of a machine. The wear of fitment holes in a gear-box casing leads to the misalignment of bearing rings and shafts with gears, their increased wear and a sharp reduction of service life. Fitment holes of case parts are restored by installing an additional part, by electric arc welding, by electric contact welding of steel tape, by ironing or by chrome plating [1][2][3][4]. However, these methods do not provide fretting resistance of the repaired fitment holes [1][2]. Restoration methods that apply polymeric materials eliminate fretting corrosion and increase the life of case parts and bearing units [5][6][7][8]. The development of the domestic chemical industry is accompanied by the permanent release of new promising polymeric materials for various purposes. A favorable environment is thus being formed for the development of high-performance restoration technologies that increase the life of case parts and the reliability of agricultural equipment while reducing the cost of technical service. The methods of restoring fitment holes with the use of polymeric materials [5][6][7][8] are distinguished by simplicity and low cost, exclude the occurrence of fretting corrosion, and increase the life of case parts and bearing units. The analysis showed that the most technologically advanced method of restoration is the application of a polymer coating on the worn surface of fitment holes.
However, the known technologies imply performing this operation manually and do not exclude shrinkage of the polymer material during curing, which affects the dimensional accuracy of the restored holes. It is proposed to restore the bearing fitment in case parts using the repair size method. Depending on wear, the fitment holes are bored to one of three repair sizes, where Drep1, Drep2 and Drep3 denote the first, second and third repair sizes, and Dnom the nominal diameter of the fitment hole. A new bearing is inserted into the prepared hole, with the outer ring covered by a coating from F-40C elastomer solution of the appropriate thickness, ensuring the fixity of the joint. As a result of the analysis of automated methods, it is proposed to apply the coating by dipping a bearing into a bath filled with the polymer solution. The coating is formed uniformly due to the rotation of the bearing and the flow of the polymer solution under the action of gravity. As a result of theoretical studies [9][10][11], the authors prepared a model for forming a polymer coating on the surface of the outer ring of a rotating bearing. For the technology of automated coating of the outer rings of bearings, new tooling is needed, as well as comprehensive experimental studies. The purpose of the research is to develop tooling and to carry out experimental studies of the automated coating of rolling bearings with F-40C elastomer solution.

Materials and methods

To study the shrinkage of polymeric coatings, F-40C elastomer solutions with the following viscosities were used: ν = 3157, 329 and 160 mm²/s. The viscosity was set by adding acetone to the concentrated F-40C elastomer solution. The viscosity of the F-40C elastomer solution with ν = 3157 mm²/s was controlled by a VNZH viscometer (GOST 10028-81E), and the solutions with ν = 329 and 160 mm²/s by a VPZH-2 viscometer (GOST 10028-81). Cured films applied on a fluoropolymer bedding served as samples. The films were rectangular, 60 × 15 mm. The thickness of the films was measured three times with an indicating snap gauge SRP-25 (GOST 11098-75). The shrinkage T of the material was calculated by the formula T = (hp − hfilm)/hp · 100%, where hp and hfilm are the thicknesses of the polymer film before and after curing, mm. To study the parameters of the mode of immersing parts in the F-40C elastomer solution, a laboratory facility was elaborated (Fig. 1). It was mounted on the basis of the lathe 1K62 and includes a centering drift pin 1, a bath for the polymer solution 2 and a magnetic stand 3. The magnetic holder MV-V is designed to fix the bath and to move it vertically when the bearings are immersed into the polymer solution. The centering drift pin serves for centering and assembling the bearings (Fig. 2). The lathe 1K62 is equipped with the common industrial frequency converter EI-7011, which enables stepless adjustment of the rotation speed from 0 min⁻¹. The number of revolutions of the bearings in the bath filled with solution was Nrev = 1, 2, 3, 4, 5 and 6. After lowering the bath, the bearings were rotated for 10 minutes at a temperature of 23 °C in order to polymerize the coating. After processing, they were kept for 24 hours at 23 °C, after which the diameter of the polymer coating and its average value were calculated using an indicating snap gauge SR-100 (GOST 11098-75). When the dependence of the coating thickness on the number of applied layers of F-40C elastomer solution was studied, films 60 × 15 mm in size served as samples.
The number of layers was 3-4. The thickness of the films was measured three times using an indicating snap gauge SRP-25. Studies of the dependence of the coating thickness on the rotational speed n of the bearing were carried out on the laboratory stand (Fig. 1). At the given frequency of rotation, bearings were immersed in the bath filled with F-40C elastomer solution. The bath was lowered after three turns. The rotation of the bearings continued for 10 minutes at a temperature of 23 °C in order to polymerize the coating. To calculate the out-of-roundness, the diameter was measured in mutually perpendicular planes; to estimate the tapering, it was measured across the width at the start and end of the polymer coating. Measurements were taken three times and the values averaged. The adhesive properties of the material were evaluated by the bond strength with the metal during ply separation of the samples (GOST 21981-76). Coatings of F-40C elastomer with viscosities ν = 3157, 329 and 160 mm²/s were applied. The force needed to ply-separate the polymer coating from the metal bedding was determined using a tensile testing machine IR5047-50. The study of the durability of the bearing fits of case parts restored with F-40C elastomer was performed on a vibration table. The load on the bearings 209 was 20 kN. The operating time before the bearing outer ring shifted inside the fitment hole was taken as the durability criterion.

Results and discussion

The studies showed that the minimum shrinkage, T = 7%, belongs to the coating from the F-40C elastomer solution with viscosity ν = 3157 mm²/s (Fig. 3). With a decrease in the viscosity of the F-40C elastomer solution, the shrinkage of the polymer coating after curing increases by a factor of up to 3.43, which is explained by the larger amount of acetone evaporating from the solution. At the minimum rotational speeds of the bearings, n = 1, 2 and 3.5 min⁻¹, the polymer solutions of viscosity 3157, 329 and 160 mm²/s, respectively, partially drained from the surface of the outer ring. This is due to the fact that the geometric head pressure of the upper layers of the liquid flow is much higher than that of the free-flow movement of the underlying layers (Fig. 6). Therefore, the thickness of the formed coatings is minimal: 0.13, 0.17 and 0.26 mm. With increasing rotation frequency of the bearing, the thickness increases.

Figure 6. Dependence of the coating thickness hn on the rotational speed of the bearing n: 1) viscosity ν = 329 mm²/s; 2) viscosity ν = 160 mm²/s; 3) viscosity ν = 3157 mm²/s.

The optimum rotation frequency of bearing 209, which induces the formation of a polymer coating of maximum thickness, is 3.0 min⁻¹ when applying the polymer solution with a viscosity of 3157 mm²/s, 6.5 min⁻¹ for 329 mm²/s, and 7.5 min⁻¹ for 160 mm²/s. A further increase in the rotation frequency of the bearing leads to stabilization of the polymer coating thickness. Adhesion is determined by wettability, which largely depends on the viscosity of the adhesive solution. Therefore, to ensure a high value of this indicator, it is recommended to apply coatings from an F-40C elastomer solution with a viscosity of ν = 160 mm²/s (Fig. 7).

Conclusion

A facility for the automated applying of polymer coatings on the outer rings of rolling bearings was developed. To apply a polymer coating to bearings, it is recommended to use F-40C elastomer solution with a viscosity of ν = 160 mm²/s, which ensures high adhesion of the coating, F = 10.0 kN/m, and a coating thickness of up to 0.52 mm.
As a result of the study, the authors defined the appropriate mode of applying coatings from F-40C elastomer solution: the number of revolutions of a bearing in the bath filled with polymer solution is Nrev = 3, and the rotational speed of bearing 209 is n = 7.5 min⁻¹. Bearing fits of case parts coated with F-40C elastomer are highly durable. It is recommended to restore fitment holes of case parts having diametrical wear of up to 0.2 mm.
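As a small illustration of the shrinkage computation described in the methods section, the sketch below averages threefold thickness measurements and evaluates T = (hp − hfilm)/hp · 100%. The sample readings are invented for illustration only; they are not the authors' data.

```python
def shrinkage(h_before, h_after):
    """Shrinkage T, %, from film thicknesses (mm) before and after curing."""
    h_p = sum(h_before) / len(h_before)      # threefold measurement, averaged
    h_film = sum(h_after) / len(h_after)
    return (h_p - h_film) / h_p * 100.0

# Hypothetical readings for a film cast from the v = 3157 mm2/s solution:
print(round(shrinkage([0.50, 0.51, 0.49], [0.465, 0.466, 0.464]), 1))  # 7.0
```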
Large Event Halls Evacuation using an Agent-Based Modeling Approach

The paper explores the usage of agent-based modeling in the context of large event halls evacuation during music festivals and cultural events. An agent-based model is created in NetLogo 6.2.2 for better representing human behavior in such situations. A series of characteristics have been set for the agents in order to preserve their heterogeneity in terms of speed, age, locomotion impairment, familiarity with the environment, evacuating with another person, choosing the closest exit or not, and selecting the closest path to the exits. An "adapted cone exit" approach has been proposed in the paper in order to facilitate the guidance of the agents in the agent-based model to the closest exit, and its advantages have been proved in comparison with the classical "cone exit" approach. Different evacuation scenarios have been simulated and analyzed for better observing the capabilities of evacuation modeling in the case of evacuation emergencies. Besides the overall evacuation time, an average evacuation time has been determined for the agents based on the individual evacuation times, which can easily be connected with a risk indicator associated with each situation. Due to the visual interface offered by the agent-based model, coupled with the evacuation indicators, the proposed model allows the identification of the main factors that may contribute to a prolonged evacuation process (e.g. overcrowding at one of the exits, not choosing the appropriate door, evacuating with a friend/parent) and of the potential measures to be considered for ensuring a safe evacuation process.

I. INTRODUCTION

Evacuation in a timely manner and in compliance with safety regulations is of great importance when it comes to an unforeseen situation [1]. In general, emergency evacuation can be considered as a traditional problem of identifying the optimal route. Numerous classical algorithms have been proposed to solve this problem, for example the Depth-First-Search algorithm [2], dynamic programming [3], Dijkstra's algorithm [4] and ant colony optimization algorithms [5]. However, evacuating a mass of people is a complicated process. The algorithms mentioned above do not take into account psychological factors, particularities of the population (gender, age, locomotion impairments) and interpersonal relationships. All these factors have a decisive impact on evacuation [6], [7]. Currently, simulations are an effective way to estimate evacuation times under the influence of variable and invariable factors. In the field of emergency evacuations, researchers most often use virtual models to simulate such evacuations. Isobe et al. [8] tested a walking situation and determined that human behavior differed depending on the environment. Yang et al. [9] found through computer simulations that a point of interest is the staircases of buildings, because in an emergency evacuation that is where the flow of people becomes congested. Helbing and Molnar's social force model [10] and Blue and Adler's cellular automaton model [11] have often been applied in evacuation simulations. The real-world simulation of an emergency evacuation allows the recording of the entire evacuation procedure.
However, the current studies are limited by environment characteristics and by factors that are difficult to implement, such as the multitude of behaviors that people could manifest in such a situation, unforeseen events that obstruct the evacuation process, and the diversity of the evacuees. In most cases the simulations are limited to the evacuation of individual rooms or building interiors. Emergency evacuation of a venue during an event is of major importance due to the large number of persons generally participating in the event. The simulation of the evacuation of large spaces under dangerous conditions is a necessary measure to prevent or reduce the number of victims [3]. If, when exposed to a source of danger, the participants at an event are not evacuated effectively, serious consequences may arise. For example, in the tragedy of October 30, 2015 in Club Colectiv (https://www.euronews.com/2020/10/30/colectiv-fire-romania-sdeadly-nightclub-blaze-is-still-an-open-wound-five-years-on), located in Bucharest, Romania, after a fire broke out, 64 people died and another 186 people were injured. This incident represents the worst nightclub fire in Romania and the country's worst accident in recent decades. In order to prevent such accidents, emergency evacuation planning is essential when organizing such an activity. Considering the other approaches in the scientific literature on public space evacuation, it can be observed that in some of these approaches the evacuation population is regarded as a single homogeneous population. As a result, the emergency evacuation of a large number of people from a location, in the shortest possible time and at the highest possible level of safety, is extremely important. In this context, the present study uses an agent-based model created in NetLogo 6.2.2 for simulating the evacuation process from a large event tent in order to highlight the evacuation times and to find the potential issues that might appear during the evacuation. The use of an agent-based approach for modeling the evacuation of large event halls is intended to overcome gaps in the literature which derive from the assumptions made in other approaches. For example, one of the assumptions in the social force models is that the population is homogeneous, not accounting for the individual characteristics or behavior of the people involved in such an event. On the other hand, the use of cellular automata models, which are able to account for these characteristics, leads to complex models that are hard to implement and run due to the existence of only one type of agent, which must possess at the same time the characteristics of the evacuation population and of the environment. Even more, in the case of cellular automata models, as the evacuation agents are represented by fixed pieces of ground, movement can only be made from one piece of ground to another; in this manner, an agent is not able to initiate a movement of a length smaller than the size of a patch. This limitation of cellular automata models pushes models using this approach away from real-life evacuation scenarios. All these limitations can be overcome in an agent-based model.
As a result, the use of agent-based modeling for such evacuation situations comes easier, as the population can be constructed as heterogeneous and, through the different types of agents available in agent-based models, the design of the environment, the interactions among the evacuation population, and the individual movement rules and characteristics of the evacuees can easily be modeled. In terms of movement, as the evacuation agents are not modeled as part of the ground, but as moving agents on top of the ground, they can stop at any position, not being forced to move from one patch to another. In the present paper, for modeling the evacuation process of a large event hall, six different scenarios are considered, in which we have varied the door availability, the size and the structure of the evacuated population, the option to evacuate with a friend or a family member, and the choice of evacuating through the closest door. The proposed agent-based model can easily be adapted to other types of large buildings by adjusting the characteristics of the event hall from the interface. In the same manner, using the interface of the model, one can easily define the type of population expected to attend such an event by selecting different characteristics in terms of age, speed and rules of movement, or in terms of behavior, e.g. evacuating with another person, or selecting or not the closest exit door as a result of familiarity with the environment or panic. In order to facilitate the shortest route to the exits, the paper proposes, based on the scientific literature, an "adapted cone exit" approach which provides the agents with the best route from any location they might occupy within the considered event hall. The contribution of the paper is twofold. First, the paper shows that by using the agent-based approach the evacuation process from a large event hall can be easily simulated and observed by any interested party, thanks to the graphical interface offered by such a model, in which the elements involved in the process are clearly identified (e.g. walls, exit doors, position of each evacuating agent, path chosen by each evacuating agent, obstacles, etc.). Second, the paper proposes an "adapted cone exit" approach through which the agents are able to find the best route from any point to the evacuation exit, making the simulation of the evacuating agents' movement closer to a real-life situation. Compared to traditional evacuation methods, the advantages of the proposed approach reside in the fact that: (1) the method clearly shows the differences in evacuation times depending on the number of exit routes available during the process; (2) the speed of movement of each individual can be customized according to the age category; (3) the simulated population takes into account the characteristics of a population under evacuation, being diversified in terms of physical peculiarities (e.g. presence of people with locomotion impairment, keeping a specific distance among agents, evacuating with another agent, choosing or not the closest exit). The study has been conducted based on the events court available in the cold seasons at the Roman Arenas, Bucharest, Romania, one of the locations most frequently used for public events in Bucharest in both the warm and cold seasons.
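The "adapted cone exit" approach itself is implemented in NetLogo and is detailed later in the paper. As a rough, hedged illustration of the general idea of guiding agents over a grid of patches to the closest exit, the following Python sketch builds a breadth-first-search distance field from the exit cells; an agent then repeatedly steps to the neighboring cell with the smallest distance. This is our own generic sketch under stated assumptions, not the paper's exact algorithm.

```python
from collections import deque

def distance_field(grid, exits):
    """BFS distance (in patches) from every walkable cell to its nearest exit.
    grid[y][x] is True where walkable; exits is a list of (x, y) cells."""
    dist = {cell: 0 for cell in exits}
    queue = deque(exits)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

def step_towards_exit(pos, dist):
    """Greedy move: pick the reachable neighbor (or stay) minimizing distance."""
    x, y = pos
    options = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), pos]
    return min((p for p in options if p in dist), key=dist.get)
```

Precomputing the field once per door configuration means every agent can find its nearest exit in constant time per step, which is one reason grid-based guidance scales to thousands of agents.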
A series of scenarios is used in the paper in order to determine the changes in the evacuation times due to various factors. Based on the simulation results, coupled with a visual analysis of the simulated scenarios, the identification of the main factors affecting the evacuation process can easily be made, and the potential measures for improving safety during the evacuation process can be observed in a timely manner. The paper is structured as follows: Section II provides a literature review related to the state of the art in evacuation. Section III briefly presents the selected location and the characteristics of the persons attending the events held there. Section IV discusses the agent-based model assumptions, types of agents and implementation. Section V presents the scenarios considered for the simulations, while Section VI discusses the simulation results. The paper continues with the limitations of the research, presented in Section VII, and ends with discussions, conclusions and further developments in Section VIII.

II. THEORETICAL BACKGROUND AND STATE OF THE ART ON EVACUATION

Empirical data has shown that the main cause of death and injury of the participants in emergency evacuations is not the disaster itself, but the irrational and impulsive behavior of crowds under the influence of panic induced by the situation [12], [13]. In order to minimize the number of casualties, it is important that architects and engineers design buildings in a way that allows evacuation under panic conditions to proceed as safely as possible [14]. The behavior of individuals under stress is very difficult to predict, since each person reacts differently depending on the environmental conditions. This is due to differences in age, gender, culture, physical and mental condition, and background [15], [16]. Despite this, during an emergency, crowd behavior tends to follow some common characteristics independent of the specific case [17]-[21]. The emergence of a particular behavior has been observed in evacuation situations with crowded populations, where people tend to collide with each other on their way towards the exits, aiming to escape the location faster in order to protect themselves [12], [13]. This makes an emergency evacuation even more dangerous than a coordinated one, significantly increasing evacuation times [11], [13], [14], [22]. Traditionally, crowd management and building evacuation are analyzed by observing pedestrians moving in controlled spaces. The movement of these pedestrians is recorded and subsequently analyzed to produce analytical mathematical models that explain the behavior of crowds [19], [23]. These analytical models provide a better understanding for engineers and architects, helping them make decisions about the design and construction of buildings and about evacuation procedures. However, the models in question are limited by the complexity of the buildings. The increased processing power of modern computers makes it possible to study the behavior of large populations during the evacuation of buildings through numerical simulations. Two types of evacuation simulation models can be encountered in the scientific research: macroscopic and microscopic [24]. In the macroscopic models the crowds are considered as a single homogeneous population, whereas the microscopic models rely on considering individual behavior and interactions. Among the macroscopic models, the social-force model, based on a molecular-dynamics approach, is one of the best known [25]-[27].
The model considers each pedestrian to be an unstructured particle whose motion is governed by Newton's equations [28]. A series of studies have been conducted to inspect individual-level interactions between people in a complex system with the purpose of exploring the mechanisms involved in the behavior of large human populations caught in an evacuation process [29]-[31]. Emergency evacuation simulations based on the social-force model have been widely used in scientific research [32]-[34]. One of the disadvantages of using such an approach is that it does not properly incorporate the behavior of different individuals. Another is the difficulty of implementation due to the relatively high number of nonlinear differential equations and the hypotheses needed for properly establishing those equations [24]. On the other hand, the microscopic models succeed in overcoming these shortcomings, but have been proven to become expensive due to the complexity they exhibit for very large populations or environments [24]. In order to reduce the complexity of the microscopic models, one can choose either to reduce the complexity of the characteristics and interactions of the considered population, or to reduce the complexity of the environment. Regarding the advantages of the microscopic models, one can name the emergent and sometimes unexpected behavior of the crowd, which could not have been observed by simply reading the behavior rules of the individuals [35], [36]. Two types of microscopic models can be encountered: discrete, based on cellular automata [37]-[39], and continuous, based on agent-based modeling [40]-[42]. As for the studies featuring large-scale crowd evacuation using microscopic models, it can be observed that both cellular automata [43], [44] and agent-based modeling [45], [46] have been used in the scientific literature. A selection of the studies in each of the two modeling categories is discussed in the following. Dang et al. [43] use cellular automata and virtual reality for simulating an evacuation scenario in a shopping mall. Based on the results, the authors state that the proposed approach is more appropriate to the type of selected environment than a simulation conducted using the Pathfinder software, as it can easily incorporate a variable related to the evacuees' environmental familiarity. One of the criticisms that can be brought to the approach relates to the movement of the evacuees, which can only be made in steps equal to one cell. The start point, the end point and all the intermediate points of the path are set to the center coordinates of the cells, so the evacuees are not able to stop at any other point of the cell, as would be expected in a real-life situation. This limitation is common, in fact, to all cellular automata models. As a result, the authors introduce in their paper the calculation of some interpolation points, which have the property of smoothing the path of the evacuees, but, as the authors state, the calculation accuracy of the chain navigation grid needs to be further optimized. Abdelghany et al. [44] incorporate genetic algorithms into a cellular automata model in order to find the optimal evacuation plan for a given situation. The proposed framework is applied to a hypothetical pedestrian facility with ten exits. The authors state that the approach can be used in practice, as it provides a superior evacuation plan when compared to a traditional approach [44].
Considering the location to be evacuated, one can state that its design is rather simple, with no elements related to the dynamics of the environmental conditions, while the presence of obstacles reflects a real-life situation only partially. Wu et al. [47] extend a classical cellular automata model by incorporating a route choice model in the case of a large indoor space. The authors state that the proposed model has the ability to successfully incorporate various aspects related to evacuation. The limitation of the work is that interactions among the evacuating agents are not included. Considering a stadium scene, Zhou et al. [48] included in their model different types of emotions that can be experienced by the persons involved in an evacuation process, showing that during the evacuation the individual emotions change, with all participants reaching calm at the end of the simulation. Even though the introduction of emotions into an evacuation model brings it closer to a real-life situation, it is not clear to what extent one can succeed in knowing the personalities of the individuals involved in such a process. In the area of agent-based modeling, Ronchi et al. [45] use the Pathfinder software for modeling a large-scale evacuation from a music festival. The authors state that the modeling approach has been appropriate for the situation under investigation, mentioning that, in general, evacuation models have limited capabilities in representing the complex behavior of the evacuees, as most of them do not account for propagation of information or social influence. Considering the model, more experimental data would need to be introduced for improving the reliability of the results, as, in its current form, the model is based only on scarce literature. Ren et al. [49] propose an agent-based model created in the Repast software. Based on the simulations, the authors observed a "faster is slower" phenomenon during the evacuation process. As the authors mention, the computational costs of such a model should be evaluated if one needs to simulate a more accurate model with an increased number of agents. The evacuation of elevated lecture halls has been simulated using an agent-based model in NetLogo by Delcea et al. [50] with the purpose of determining an adequate seat arrangement for a faster evacuation process. The authors show that the proper combination between the seat arrangement and the position of the evacuation doors can diminish the evacuation time. Siyam et al. [51] provide a comprehensive overview of agent-based simulation in the case of pedestrian evacuation, while Sharbini et al. [52] and Bayram et al. [53] provide extended reviews on evacuation planning and management. Chen et al. [54] discuss the approaches, models and tools used for pedestrian evacuation in indoor emergency situations.

III. PREREQUISITES

Details related to the selected location and the types of participants at the events held in the analyzed hall are discussed in the following.

A. LOCATION

The selected event hall is in a central area of Bucharest, Romania, and is usually used for hosting international music festivals and international concerts (Fig. 1). During a large part of the year (October-May), the central part of the location provides a heated tent with a length of approximately 45 m and a width of approximately 25 m, able to accommodate up to 2500 persons per event.
This part of the location is the one analyzed in the current paper and we will refer to it as the "hall" or the "tent" in the remainder of the paper. FIGURE 1 The Roman Arenas location. The tent has four access ways: two of them are used for entering the location in normal situations, while the other two are dedicated to the emergency situations that might occur. Fig. 2 presents the architectural scheme of the tent, in which we have marked the two exit doors with A and B and the emergency doors with C and D; the stage is represented in blue, the two storage spaces located in the upper-left and bottom-left sides of the view are marked in cyan, the four pillars near the stage are colored in light-blue, the two food stands are marked in orange, the sound-operators' stand is marked in magenta, the two token stands are colored in orange-red, while the three merchant stands located on the right side of the view are marked in light-pink. The dimensions of the tent, the entrances and the obstacles present inside it are proportional to the real ones. It should be noted that the front of the tent is delimited by a fence guarded by security personnel, and access to the stage is allowed only to persons with special permission (artists, managers, and photographers). B. POPULATION The persons attending the events held in the tent can be divided into two categories: "standard occupants" and "people with locomotion impairments" [45]. The first category includes children, adults and seniors, while the second category covers all the persons with locomotion impairments (wheelchair users, persons with crutches or canes). Based on the data reported by the firms organizing events in this location, it has been observed that, most of the time, the average number of tickets sold for such an event is 2000. The distribution of the population differs according to the type of event held in the tent. An average distribution determined from the events held in the pre-pandemic period has revealed that most of the participants belong to the adult category (approximately 81.8%), followed by seniors (15.0%), children (3.0%) and people with locomotion impairments (0.2%). Besides the participants, an average of approximately 39 persons (part of the staff) are present at the location. IV. THE AGENT-BASED MODEL An agent-based model belongs to a class of computational models created to simulate the actions and interactions of autonomous agents (both individuals and collective entities) in order to ascertain their effects on the system or on other entities [55]-[57]. These models combine elements of game theory, complex systems, emergence, computational sociology, operations research, and evolutionary programming [58]-[60]. Different methods are used to introduce randomness when stating the agents' behavior. In the recent literature it has been observed that agent-based models are used in non-computational scientific fields, including biology, ecology and the social sciences [61]-[63]. Agent-based models are related to the concept of multi-agent systems or multi-agent simulations, as their purpose is to provide an explanation for the collective behavior of agents that follow simple rules. Individual agents are characterized as rational, namely their behavior is oriented toward their own interest, using simple decision-making or heuristic rules [64], [65].
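To make the reported attendance structure concrete, the following minimal Python sketch draws a synthetic population with the average pre-pandemic shares quoted above; the names and the seeding are illustrative assumptions, not part of the paper's NetLogo implementation.

    import random

    # Average shares reported for the selected hall (Section III.B).
    POPULATION_MIX = {"adult": 0.818, "senior": 0.150,
                      "child": 0.030, "impaired": 0.002}

    def sample_population(n_tickets=2000, seed=42):
        """Draw a synthetic attendance whose categories follow the
        reported average distribution for the pre-pandemic events."""
        rng = random.Random(seed)
        categories, weights = zip(*POPULATION_MIX.items())
        return rng.choices(categories, weights=weights, k=n_tickets)

    # For 2000 tickets one expects roughly 1636 adults, 300 seniors,
    # 60 children and 4 persons with locomotion impairment.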
Agent-based modeling is, in some respects, the most complex method through which a real environment is reproduced in a controlled setting. This type of modeling integrates environmental data with the behavioral and demographic aspects of the population in order to provide important data for theoretical studies and estimates. As a result, agent-based models are virtual models that aim to reproduce the behavior of individuals in a given environment. They are more intuitive than mathematical or statistical models, as they can represent objects in a manner similar to the way these are seen in reality. Due to the emphasis placed in the last 30 years on object-oriented programming, agent-based models have become easier to implement, while their structure is easy to shape based on mathematical and behavioral statistical models. For the evacuation of large event halls, an agent-based model has been created using NetLogo 6.2.2. The model's graphical user interface (GUI) is presented in Fig. 3, while a close-up on the elements of the GUI is depicted in Appendix B. A series of elements can be configured from the interface and/or by uploading a text file containing the structure of the hall to be represented. Appendix A provides a table of nomenclature for the variables included as inputs and outputs in the agent-based model. The variables that refer to the characteristics of the evacuating population (e.g., speed) have been extracted from the scientific literature, the variables needed for building the agent-based model's environment have been taken from measurements of the concert hall, the variables related to the population structure have been extracted from the statistics associated with the usual attendance of the events organized in the selected concert hall, while the variables related to the population's movement rules have been derived from the evacuation simulations. The configuration of the hall allows the user to establish its structural elements, such as the dimensions of the hall (length and width), the different stands, the position of the stage, and the position and availability of the doors. In terms of population, one can configure the number of persons attending an event, the structure of the attending population (children, adults, seniors and persons with locomotion impairment), the presence or absence of staff, the locations of the persons acting as staff at the event, and the possibility to set up families who might or might not evacuate together. A series of assumptions have been stated in order to build the agent-based model, as presented in sub-section A. In terms of agent types, two have been used: turtles, agents possessing human characteristics in terms of evaluating the surrounding world and making the needed evacuation decisions, and patches, small pieces of ground through which the event hall has been divided into squares, having a series of characteristics which help the turtle agents to better understand the environment and guide them to the available exits. The characteristics of each type of agent are described in sub-section B, along with the agents' movement rules. A. ASSUMPTIONS For building the agent-based model, the considered environment has been divided into small pieces of ground with a square surface of 0.5 m × 0.5 m [66], [67]. As a result of this division of the surface into small areas, the rest of the elements represented in the agent-based model (objects, doors, stands, stage, persons, etc.)
are scaled to match multiples of these values. In particular, a person is represented in space by assuming a shoulder width of 0.4558 m, an assumption in line with previous studies involving human movement and behavior [45], [68]. As two persons cannot occupy the same space, once an agent occupies an area given by the size of its shoulders, that space cannot simultaneously be occupied by another agent. The walking area for the evacuees is the flat floor of the tent; all the other objects installed by the organizers, such as the stands and the stage, cannot be crossed by the agents during the evacuation process, representing obstacles that must be bypassed. The movement speeds of the agents representing the "standard occupants" involved in an evacuation process have been taken from the data provided by Korhonen and Hostikka [68], as presented in Table 1. As not all the persons involved in an evacuation process have the same evacuation speed, we have considered the uniform probability distributions for the speed proposed by Korhonen and Hostikka [68] - Table 1. This assumption is in line with the observation of Ronchi et al. [45], who mentioned that, in order to account for the variability of people's abilities, the unimpeded walking speeds can be determined through the use of distributions. As for the persons with locomotion impairment, the average movement speed and the speed range have been taken from the research conducted by Hashemi [69]. Regarding the staff members, they have been associated with young adults and are therefore included in the adult population, having similar movement speed rules. The agents moving towards the exits in the evacuation process are considered rational. As a result of this assumption, when the evacuation process starts, each agent starts moving towards the closest possible exit, using the shortest path to the chosen exit. In reality, the proper choice of the closest door can be made by the agents as a result of the experience or familiarity they have with the location, as they might have attended other events there, or due to the implementation of some guidance system by the organizers of the event (e.g., guiding lights on the floor or a smartphone application). In the case in which one or more of the exits are not available for evacuation, the agents will be aware of this situation and will proceed to the closest exit that can be used in the evacuation process. Also, the doors can be partially available for the evacuation, representing the case in which one half of the door is open while the other half is closed for various reasons, such as the impossibility of being opened as a result of a technical issue. For simulating such a situation, the user can choose from the interface, for each of the four exits, the "fully-opened", "half-opened" or "closed" option by using the chooser for each of the doors (exit-A, exit-B, exit-C and exit-D).
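As a sketch of the speed assignment described above, each agent's unimpeded speed can be drawn from a uniform distribution for its category. The numeric ranges below are placeholders for illustration only; the model uses the published values from Korhonen and Hostikka's Table 1 [68] and from Hashemi [69], which are not reproduced here.

    import random

    # Placeholder ranges in m/s; hypothetical, for illustration only.
    SPEED_RANGE = {"child": (0.6, 1.1), "adult": (1.0, 1.5),
                   "senior": (0.6, 1.0), "impaired": (0.4, 0.8)}
    SPEED_RANGE["staff"] = SPEED_RANGE["adult"]  # staff treated as young adults

    def unimpeded_speed(category, rng=random):
        """Uniformly distributed walking speed, mirroring the model's
        assumption of a uniform distribution per population category."""
        lo, hi = SPEED_RANGE[category]
        return rng.uniform(lo, hi)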
As the individual evaluation of the closest exit position can sometimes be subjective when no support is offered in this process, and the persons involved in an evacuation cannot always correctly evaluate the closest exit (this might happen due to panic or stress generated by being in such a situation, or due to other conditions generated by the emergency itself, such as the view of the exits being restricted by smoke while the evacuees are not familiar with the environment because they have not previously attended another event in the tent), there might be cases in which the evacuees, even though they are assumed to be rational, wrongly evaluate the closest evacuation door and choose another door which is not the closest to their actual position. For such situations, the model can be adjusted from the interface so that, for a given share of the participants, the choice of the closest exit is suboptimal, namely a randomly selected participant will not choose the closest exit, but rather one of the remaining exits. The two situations, in which all the agents choose the closest door for sure and in which they choose the evacuation door following a probability distribution, are implemented in the agent-based model through the %-participants-choosing-the-closest-exit slider and will be used in the simulations for better observing the differences in evacuation times. When children or persons with locomotion impairment are present at the events held in the tent, it is assumed that they attend the event accompanied by an adult agent. As a result, when the population is randomly positioned in the environment, the children and the persons with locomotion impairment are always located near an adult agent. In the case of an emergency, two situations are possible: the adult might choose to help the child or the person with locomotion impairment to evacuate, in which case the two persons evacuate together, with the adult agent walking all the time behind the child or the person with locomotion impairment at a speed equal to the average of the two agents' speeds, or the adult chooses not to help, in which case each agent moves towards the exits at its own speed. The choice between the two situations can be established from the interface by selecting "on"/"off" in the with-assist switcher. For the situation in which the agents do not necessarily choose the closest exit, as the evacuation exit is determined using the probability distribution, and the adult agent decides to help the child agent or the agent with locomotion impairment, the evacuation door selected by the adult agent will be used by both agents. As for the staff members, it is assumed that they will always choose the closest exit, as they are familiar with the environment. The evacuation process is considered complete when all the persons have evacuated through one of the available exit doors. B. IMPLEMENTATION The agent-based implementation in NetLogo 6.2.2 has been made through the use of the patch and turtle agents. The patch agents have been used for building up the environment and for providing the information related to the exit doors to the turtle agents. In order to fulfill this purpose, the patch agents possess the characteristics in Table 2. The shortest distance to the available exits is retained in the exit-energy variable of each patch through the use of a vector. The vector contains a number of values equal to the number of available exits.
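The two behavioral rules just introduced, the %-participants-choosing-the-closest-exit slider and the with-assist pairing, can be sketched as follows; the helper names are hypothetical, and picking uniformly among the remaining exits is our reading of the suboptimal-choice description.

    import random

    def choose_exit(distances, p_closest, rng=random):
        """Return the index of the chosen exit: with probability p_closest
        the nearest available exit, otherwise one of the remaining exits,
        picked uniformly at random (the suboptimal-choice case)."""
        ranked = sorted(range(len(distances)), key=distances.__getitem__)
        if len(ranked) == 1 or rng.random() < p_closest:
            return ranked[0]
        return rng.choice(ranked[1:])

    def paired_speed(adult_speed, assisted_speed):
        """Speed of an adult escorting a child or a person with locomotion
        impairment: the average of the two individual speeds."""
        return (adult_speed + assisted_speed) / 2.0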
In order to better explain how the values are determined, we will consider in the following a hypothetical situation in which only one exit door is available for a certain room. As a result, the vector exit-energy contains only one value. In order to determine the shortest path (exit-energy) to an exit, an "adapted cone approach" based on Biner and Brun's "cone approach" [31] - Fig. 4 (A) - has been implemented, with a few differences. The first difference concerns the way the numbers associated with the distances are determined. In our approach, the available exits receive "0" for the patches representing them, while the numbers associated with the nearby patches increase following a cone rule, instead of decreasing as in [31] - Fig. 4 (B). The second change resulted from watching the evacuation simulations made using the approaches (A) and (B) presented in Fig. 4. Based on the simulations, it has been observed that when multiple paths are available from an agent's current position to the closest available exit door, not all of them are equal in length. For example, in Fig. 5, an agent located in the upper-left corner of the image, near the obstacle marked in red, on the cell having an exit-energy of 17, can take two paths for reaching the exit patches marked with 0. Following the classical "cone exit" approach, either of the two paths (marked with yellow arrows) can be followed, as at every moment of time the agent chooses, among the adjacent patches, the patch with the smallest energy. But looking closer at the lengths of the paths, one can see that they are, in fact, not equal, as the upper path measures approximately 14 units. As a result, in the agent-based model we have implemented an "adapted cone exit" approach (Fig. 4 (C)) in which the exit-energy values are determined starting from the exit doors, using the smallest value among the exit-energies of the adjacent patches updated with the distance between the current patch and each of the adjacent patches. A numerical exemplification is provided in Fig. 6. From the figure it can be observed that the exit-energy of the bottom-left patch, marked with a darker shade of blue than the other patches, is determined based on the exit-energies of its adjacent patches, which have been determined in a previous step. As a result, the exit-energy for the above-mentioned patch is the minimum over the exit-energies of the adjacent patches (1.4, 1 and 2), to each of which we add the distance between the centers of the selected patches (namely 1, 1.4 and 1). As a result, the exit-energy of the bottom-left patch is equal to 2.4. Considering the same situation as in Fig. 5 (representing a part of the selected event hall from the agent-based model) and using the "adapted cone exit" approach, a partial view of the model developed in NetLogo 6.2.2, representing the lower-right exit (noted with exit D in Fig. 3), is presented in Fig. 7. As can be observed, an agent located in the upper-left cell of the picture will decide to follow the path marked by yellow arrows in order to arrive at the exit marked with 0. This choice is made by comparing the exit-energies of the adjacent cells, namely 13.1 and 14.5, and by choosing the smallest value (13.1 units), which belongs to the path highlighted in Fig. 7. The code for the proposed "adapted cone exit" is presented in Table 3.
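Since Table 3 is not reproduced here, the following Python sketch captures the essence of the "adapted cone exit" computation: a shortest-distance field grown outward from the exit patches, where an orthogonal step costs 1 and a diagonal step costs sqrt(2) (the 1.4 used in the figures). The grid representation and the names are our assumptions; the paper's implementation is in NetLogo.

    import heapq
    import math

    def exit_energy_field(walkable, exits):
        """Compute, for every walkable cell of a 2D boolean grid, the
        shortest walking distance to the nearest exit cell (Dijkstra
        expansion over the 8 neighbours, starting from the exits at 0)."""
        rows, cols = len(walkable), len(walkable[0])
        energy = [[math.inf] * cols for _ in range(rows)]
        heap = []
        for r, c in exits:                     # exit patches start at 0
            energy[r][c] = 0.0
            heapq.heappush(heap, (0.0, r, c))
        while heap:
            e, r, c = heapq.heappop(heap)
            if e > energy[r][c]:               # stale queue entry
                continue
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and walkable[nr][nc]:
                        step = math.hypot(dr, dc)   # 1.0 or ~1.414
                        if e + step < energy[nr][nc]:
                            energy[nr][nc] = e + step
                            heapq.heappush(heap, (e + step, nr, nc))
        return energy

Growing the field from the exits with true step lengths is what removes the ambiguity of the classical cone approach: two paths that look equal in "energy" but differ in metric length receive different values, so a greedy descent always follows the genuinely shortest route.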
In the case in which the agents are blocked and cannot advance any more on the shortest path, in general because they meet on this path, for a short period of time, another agent coming from the opposite direction, the agents are able to choose the next shortest available path until they escape the blockage and then return to the shortest path to the selected exit door. This situation might appear in narrow areas such as the areas between the stands and the walls of the event hall. Compared to a chain navigation grid [43], the "adapted cone exit" offers a more realistic view of the participants' evacuation process, with an impact on the calculation of the overall evacuation time - Fig. 8 (chain navigation versus adapted cone exit). The advancement of the agents towards the exits is not conditioned by their position with respect to a certain patch (e.g., the agent does not have to be positioned in the middle of a patch at the beginning of the walk and does not have to arrive in the middle of a patch at the end of a time unit). As will be discussed in the following, where the turtle agents are described, an agent's advancement within each moment of time depends on its speed and is made by selecting each time the adjacent empty patch with the smallest exit-energy. The turtle agents are used for representing the participants involved in an evacuation process. Each turtle agent is depicted using a circle of a different color in accordance with the type of participant to the event (child: green, adult: yellow, senior: grey, person with locomotion impairment: red, staff member: blue) - Fig. 9. The turtle agents are randomly created within the event hall at the beginning of the simulation, and the proportion of agents from each category is set from the %-children, %-with-locomotion-impairment and %-seniors sliders in the agent-based model interface. The staff members are created at random positions in the areas adjacent to their working locations. The characteristics of the turtle agents are presented in Table 4. After setting up the environment and creating the agents to be evacuated, in a simulation run the turtle agents proceed to the exit doors based on their chosen-exit-index and at their own speed, being guided in the choice of path by the values of the exit-energy vector stored at the level of every patch. In order to test the choice of the shortest path to an exit, we have simulated the situation presented in Fig. 7 by manually placing an agent in the patch located in the upper-left corner of the figure. The moves of the agent towards the evacuation exit are provided in a few screenshots in Fig. 10 (it should be mentioned that we have not depicted the moves at every tick, as the figure would have occupied more space). From Fig. 10 it can be observed that the agent follows the same path as depicted in Fig. 7. Depending on the speed of the agent, the time needed to cover the distance in Fig. 10 may vary. As the turtle agents advance towards the selected exit at their own speed, an agent can arrive at the end of each moment of time (at each tick) anywhere inside a patch, not being conditioned to arrive in the middle of the patch. The speed of an agent involved in an evacuation process is equal to its speed determined at the beginning of the simulation process, but can be reduced at any time if in front of the agent there are agents moving at a lower speed on the same path and congestion appears. The evacuation simulation stops when the last turtle agent has evacuated, no matter which door was used for evacuation.
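A minimal sketch of the per-tick cell selection described above, including the fallback when the best cell is occupied (the temporary blockage case); here the energy field is kept as a dict from cell to value for brevity, and the occupancy handling is a simplification of the NetLogo logic.

    import math

    def next_cell(pos, energy, occupied):
        """Among the 8 neighbours of pos, return the free walkable cell
        with the smallest exit-energy, or None if the agent is fully
        blocked; skipping occupied cells approximates the short detour
        an agent takes before returning to the shortest path."""
        r, c = pos
        candidates = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                cell = (r + dr, c + dc)
                if cell in occupied:
                    continue
                e = energy.get(cell, math.inf)
                if e < math.inf:
                    candidates.append((e, cell))
        return min(candidates)[1] if candidates else None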
Regarding the evacuation process, two indicators can be determined through the agent-based model. The first one is the overall evacuation time and measures the time needed for the entire participant population to evacuate from the event tent, while the second one is the average evacuation time and is determined by summing up all the individual evacuation times and dividing the result by the number of agents. Both indicators are expressed in ticks in the agent-based model and can be transformed into seconds by dividing the result by 0.4. For validating the model, a population composed of 53 adults (Fig. 11) has been evacuated from a small room, 4 m × 13 m, featuring a single exit door and a fixed obstacle placed in the central-left side of the room. An informed consent has been obtained from the participants. FIGURE 11 Evacuation of a small population of adults from a small room. The agent-based model has been adapted to fit the new environment, and the agents, all adults, have been placed at random positions. A view of the evacuation process corresponding to one simulation run is presented in Fig. 12. The overall evacuation time has been recorded by conducting three rounds of evacuation, with the evacuees placed at random locations within the considered space. The time has been compared with the one obtained through the use of the agent-based model adapted for the new situation, run 10,000 times using the BehaviorSpace option in NetLogo [56], [64]. In terms of time differences, an average difference in evacuation time of 4.58% has been determined, which has been considered acceptable given the randomness in the speeds of the agents and their positions within the considered environment. V. SCENARIOS All the simulations have been conducted using the BehaviorSpace tool offered by NetLogo. Each scenario has been run 10,000 times and the average values have been reported in the paper, rounded to the nearest integer. The tool has been designed for conducting a large number of simulation experiments with an agent-based model [64]. An evacuation simulation can be considered complete when all participants in the event have been evacuated to a safe place. In general, a safe area is a location that is not affected by a disaster or emergency. In this article, the safe space is considered to be the outside of the event tent, which means that the simulation is completed once the entire population has evacuated from inside the tent. Fig. 13 presents an evacuation simulation using the agent-based model for the considered event hall with 1500 participants and 39 staff members, all of them choosing the closest exit, in which the display-energy option has been set to "off" for providing a better view of the agents' paths. Additionally, the display-path option has been set to "on" for better observing the paths considered by the agents involved in the evacuation process. The images in Fig. 13 depict the evacuation environment and the positions of the agents before the evacuation started (A), at a random moment of time (B), when the participants are evacuating only through Exit D (C) and at the end of the simulation process (D).
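The two indicators, the tick-to-second conversion stated above and the 10,000-run averaging used throughout the experiments can be expressed as follows; simulate_once is a hypothetical stand-in for one run of the NetLogo model, and the 0.4 factor is simply the conversion given in the text.

    import random
    import statistics

    def indicators(individual_ticks):
        """Overall (last agent out) and average evacuation time in seconds,
        using the paper's stated conversion: seconds = ticks / 0.4."""
        overall = max(individual_ticks) / 0.4
        average = statistics.mean(individual_ticks) / 0.4
        return overall, average

    def averaged_experiment(simulate_once, n_runs=10_000, seed=1):
        """Average the indicator tuples over repeated stochastic runs and
        round to the nearest integer, as reported in the result tables."""
        rng = random.Random(seed)
        results = [simulate_once(rng) for _ in range(n_runs)]
        return tuple(round(statistics.mean(col)) for col in zip(*results))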
The scenarios considered in the study have been set up by varying different aspects of the evacuation set-up, such as the availability of the doors (Scenario I), the size of the evacuated population (Scenario II), the structure of the population (Scenario III) and the choices made regarding evacuating with a friend or family member (Scenario IV) or not choosing the closest door (Scenario V). Each scenario has been divided into sub-scenarios for better highlighting how the variation of the selected indicators impacts the overall result. Each sub-scenario has been simulated 10,000 times and the average results have been reported in the paper, rounded to the nearest integer. Additionally, a combined scenario has been set up (Scenario VI) in which some of the elements analyzed individually in Scenarios I-V have been included in the same experiment. In all the scenarios, besides the overall evacuation time and the average evacuation time, the average distance travelled has been reported, to give more insight into the paths considered by the agents in their evacuation process. The results for the overall evacuation time and the average evacuation time have been transformed from ticks to seconds, while the values for the average distance travelled have been reported in meters instead of patches, for ensuring a proper connection to the units used in everyday life. VI. SIMULATION RESULTS In the following, the results obtained in the case of each scenario set up in the previous section are presented and discussed. A. SIMULATION RESULTS FOR SCENARIO I For setting up the conditions of Scenario I, the population has been held equal to 2000 participants (81.8% adults, 15.0% seniors, 3% children and 0.2% persons with locomotion impairment, as in most of the events organized in the selected location) and 39 staff members. Additionally, it has been assumed that the children and the persons with locomotion impairment evacuate individually and that all the agents choose the closest evacuation door for sure. The availability of the four doors has been varied within the sub-scenarios as presented in Table 5. The baseline scenario, S-I.1., considers the case in which all the exits are available for evacuation and can be completely used in the evacuation process. Scenarios S-I.2.-S-I.5. assume that only three of the evacuation exits can be used at their full capacity, while in the S-I.6.-S-I.11. scenarios only half of the exits are available for the evacuation process. As the case in which only one door is available is hard to encounter in practice, we have excluded this assumption when building the scenarios. As presented in Table 6, the best values for the reported indicators have been obtained for the S-I.1. situation, in which all four exit doors are available for the evacuation process. This result was expected, and it confirms the observations made in the research literature that a higher number of exits produces a faster evacuation process. The overall evacuation time is approximately 9 minutes and 7 seconds for S-I.1. On average, an agent needs 3 minutes and 12 seconds to evacuate, walking approximately 26 m. For the scenarios featuring the unavailability of one of the exits during the evacuation process, S-I.2.-S-I.5., it can be observed that the most unfavorable situation is the one in which Exit B is closed, which increases the overall evacuation time by 5 minutes and 35 seconds, representing an increase of 61.24%.
As expected, the values recorded for the average evacuation time and the average distance travelled are higher in S-I.3. than in S-I.1., by 1 minute and 42 seconds and by 9 m, respectively. The lowest impact on the overall evacuation time among S-I.2.-S-I.5. is recorded in the case of S-I.2., when Exit A is closed: 4 minutes and 45 seconds lower than in the case of S-I.3. and 50 seconds higher than in the case of S-I.1. Even though Exit A is larger than Exit C and Exit D, the impact of not being able to use Exit A for evacuation is smaller on the overall evacuation time than in the case of the other two exits (C and D), by up to 3 minutes and 2 seconds. As the agent-based model offers a GUI that facilitates the observation of the agents' behavior during the entire evacuation process, a visual comparison between the S-I.2.-S-I.5. scenarios has been conducted with the purpose of identifying the differences in the recorded values of the three indicators reported in Table 6. Based on the observations, it has been determined that the prolonged evacuation time in some of the S-I.3.-S-I.5. scenarios compared with S-I.2. is due to the congestion around one of the available exit doors. Specifically, it has been observed that in the case of S-I.2. the last agent evacuates through Exit D around T=525s (with T the moment of time), through Exit B at approximately T=580s and through Exit C at T=597s. As for S-I.3., it has been observed that no agent evacuates through Exit C after T=432s or through Exit A after T=553s, while all the remaining agents evacuate through Exit D until T=882s. A similar situation is observed for S-I.4., where the agents evacuate through Exits A and B until T=348s and T=335s, respectively, and through Exit D until T=779s. Lastly, for S-I.5., the agents evacuate through Exit A until T=248s, Exit B until T=513s and Exit C until T=694s. Nevertheless, the positions of the evacuation doors and their widths have a decisive role in the recorded overall evacuation time. For example, in the S-I.2. and S-I.5. scenarios, when the unavailable doors are Exit A and Exit D, respectively, the overall impact of these doors' unavailability on the overall evacuation time is reduced compared to the S-I.3. and S-I.4. scenarios, when Exit B and Exit C are unavailable. These situations occur because, in S-I.2., Exit A, even though a large door, is located at the very front of the hall and does not gather as many agents as Exit B, a large door which is open in this scenario and is able to compensate for the absence of Exit A. A similar situation occurs in the case of S-I.5., where Exit D, even though a small exit placed in the middle-back of the hall with the potential of being the closest exit for more agents than Exit C, is closed and a series of agents decide to use Exit B, a large door which can balance the situation, providing enough space for the agents' evacuation. On the other hand, considering S-I.3. and S-I.4. and the state of the simulations in Fig. 14, it can be observed that after T=553s and T=348s, respectively, the only door with congestion is Exit D, which, due to its small dimensions, does not succeed in producing a faster evacuation process. As a result, for this particular event hall, one can consider enlarging Exit D, as this would lead to a smaller overall evacuation time should the S-I.3. and S-I.4. scenarios (the scenarios with the highest overall evacuation times) occur.
In terms of average evacuation time, it can be observed that the difference between the four situations (S-I.2.-S-I.5.) is up to 49 seconds, while the difference in average distance travelled is up to 3 m. As for the scenarios in which two of the exit doors are unavailable, S-I.6.-S-I.11., it has been observed that the most unfavorable situations are the ones in which Exit B is closed, either in combination with Exit A (S-I.6.), Exit C (S-I.9.) or Exit D (S-I.10.). The overall evacuation time for the three situations in which Exit B is closed along with any of the remaining exit doors ranges between 16 minutes and 24 seconds and 18 minutes and 16 seconds, being up to 24.26% higher than in the case in which only Exit B is closed (S-I.3.) - Table 6. Considering the position of Exit B within the event hall and the size of this exit, these results were expected. Also in terms of average evacuation time and average distance travelled, the three situations (S-I.6., S-I.9. and S-I.10.) score the highest values among the scenarios featuring two closed exits, the average evacuation time being up to 85.17% higher than the one recorded for S-I.3., where only Exit B was closed, while the average distance travelled is up to 51.52% higher than in S-I.3. The scenarios featuring two unavailable exit doors that produce the lowest overall evacuation times are S-I.11. and S-I.8., both having Exit D closed, in combination with either Exit C or Exit A. While the results obtained for S-I.8. (overall evacuation time of 857s, average evacuation time of 356s and average travelled distance of 41 m) were expected, given the fact that S-I.2. (having only Exit A closed) and S-I.5. (having only Exit D closed) produce the lowest overall evacuation times (597s and 694s, respectively) among the scenarios with one unavailable door, both lower than the overall evacuation time obtained for S-I.8. (where both Exits A and D are closed), the results obtained for S-I.11. in terms of overall evacuation time are surprising. By "surprising" we do not refer to the fact that the overall evacuation time for S-I.11. is lower than in all the other scenarios in which two doors are unavailable (S-I.6.-S-I.10.), as this result was expected given the fact that both exit doors C and D are smaller than the remaining open exit doors A and B, which can ensure a faster evacuation process than the case in which exit doors C and D are open, but surprising in comparison with the S-I.5. scenario, in which only Exit D is closed. For a more in-depth analysis, we have run the two scenarios several times, namely S-I.11., with an overall evacuation time of 641s and Exits D and C closed, and S-I.5., with an overall evacuation time of 694s and only Exit D closed. An evacuation state at the end of two simulations for the two scenarios is provided in Fig. 15. As can be observed in Fig. 15, the difference in the overall evacuation time between the two scenarios derives from the imposed rule that the agents evacuate using the shortest path to an available exit. In the case of S-I.5., a series of agents decide to evacuate using Exit C, as it is located near them, even though it is a smaller exit, which leads to congestion; they ignore the fact that Exits A and B, situated at a greater distance than Exit C, are larger and might represent a better option. This choice leads to an increased overall evacuation time.
Fig. 16 depicts the evacuation time versus the number of evacuated participants for the two scenarios, considering time intervals of 25s. In the case of S-I.5., approximately 1193 participants evacuate in the first 250s, compared to 947 participants in the case of S-I.11., while between 250s and 600s the number of evacuees changes in favor of S-I.11. (1076 participants vs. 773 participants in S-I.5.). On the other hand, at the personal level, the average evacuation time and the average distance travelled are lower in the S-I.5. scenario (245 seconds, 32 m) compared to S-I.11. (275 seconds, 38 m). Considering these results, it can be stated that for a large number of participants the evacuation process has been shorter when only one door is unavailable (S-I.5.), but, at the same time, there have been some participants, a relatively small number, for whom the evacuation process has been longer, increasing their personal risk. Based on these observations, if needed, the agent-based model can be adjusted in order to incorporate congestion as a second criterion when choosing the evacuation door. This improvement applies to those evacuation cases in which the participants have the ability to observe the congestion at the other doors and change their mind regarding the selected evacuation door. In reality, in large crowd evacuations, due to the large number of participants, it is hard for someone caught in a congestion situation to have visibility of the other evacuation doors. Even more, in cases in which the emergency involves a lot of smoke, a lack of light or the occurrence of other hazardous conditions, such as large fallen objects or ceiling parts, adjusting the selected door based on observing the congestion around the other evacuation doors does not seem a realistic scenario. As a general result, based on the data provided in Table 6, it can be stated that the position, the number and the width of the evacuation exits impact the evacuation time; however, as observed in the results of the simulations made for Scenario I, the results are highly dependent on the overall structure of the event hall, which implies the need for building a model that reproduces as precisely as possible the characteristics of the simulated environment. B. SIMULATION RESULTS FOR SCENARIO II Scenario II discusses the differences in evacuation time when the number of participants varies. Table 7 presents the considered cases. S-II.3. is the baseline situation, as it reproduces the general audience of the event hall, the results being the same as in the S-I.1. scenario. The other three scenarios feature the highest number of participants that can attend an event (2500 participants, S-II.4.) and lower numbers of participants (1000 and 1500 participants, respectively). The structure of the participants is the same in all the considered scenarios: 81.8% adults, 15.0% seniors, 3.0% children and 0.2% persons with locomotion impairment, while all four evacuation doors are available during the entire evacuation process. A number of 39 staff members is added to each considered scenario. The results obtained after 10,000 simulations are presented in Table 8, rounded to the nearest integer. Compared to the baseline scenario, S-II.3., a 50% reduction of the number of participants, S-II.1., leads to a reduction of the overall evacuation time of 47.53%, with a comparable reduction of the average evacuation time of 45.83% and a reduction of the average travelled distance of 34.62%.
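All the scenario comparisons above reduce to relative changes against a baseline. As a sanity check, the helper below reproduces the reported 47.53% reduction, starting from the S-I.1./S-II.3. overall time of roughly 547 s (9 minutes and 7 seconds); the resulting value of about 287 s for S-II.1. is inferred from the percentages, not quoted from the paper.

    def pct_change(baseline, value):
        """Relative change of an indicator versus the baseline scenario."""
        return 100.0 * (value - baseline) / baseline

    print(pct_change(547, 287))  # about -47.5%, matching the S-II.1. reduction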
On the other hand, a reduction of 25% in the number of participants compared to the baseline reduces the overall evacuation time by 23.58%, the average evacuation time by 22.92% and the average travelled distance by 15.38% (S-II.3. vs. S-II.2.), while an increase of 25% raises the overall evacuation time by 23.40%, the average evacuation time by 23.44% and the average travelled distance by 11.54% (S-II.3. vs. S-II.4.). Based on the recorded values, it can be stated that the changes in the number of evacuated persons impact all three indicators, the least affected indicator being the average distance travelled. As for the overall evacuation time, Fig. 17 presents the evolution of the number of evacuated participants over time in the four scenarios. Based on Fig. 17, it can be observed that for a period of time ranging between 75s (S-II.1.) and 225s (S-II.4.), the number of evacuated persons per unit of time is 149-153 persons, decreasing afterwards until the end of the simulation. This period corresponds to the time in which all four exits are used at their maximum capacity and can be easily identified by running the agent-based model and watching the evacuation process. After this period, step by step, the agents choosing Exits A, B and C finish the evacuation. In all the situations, Exit D is the last door through which the participants are still evacuating. As a result, the evacuation time is prolonged due to Exit D's position and width, with a more accentuated impact in the situations in which the number of participants is large. Even in the case of Scenario II, it can be observed that the number of participants has an impact on the considered indicators, increasing their values as the number of participants increases. Besides this influence, the structure of the room plays an important role, as the indicators might increase more slowly if the positions of the evacuation doors and their widths are adjusted to better fit an evacuation situation. C. SIMULATION RESULTS FOR SCENARIO III As in the previous scenarios, we have kept the same baseline scenario as in S-I.1. (called now S-III.1.) and we have built upon it by varying the elements related to the structure of the evacuating population, keeping them close to the possible real structures of the crowds attending such large events. The sub-scenarios to be analyzed are presented in Table 9. The simulation results are reported in Table 10. The first observation is that, as the number of adults decreases, the overall evacuation time increases. This change in the overall evacuation time can be attributed to the speed values set for the considered categories, the adults being the category with the highest speed. A decrease of the adult population from 81.8% (S-III.1.) to 15.0% (S-III.6.) determines an increase in the overall evacuation time of 15.54%. At the personal level, the average evacuation time increases by 16.67%, while the average distance travelled is almost the same; the small changes reported in this indicator might be due to the stochastic simulation and the rounding to the closest integer. As for the S-III.4. situation, in which a quasi-balanced population (between adults, seniors and children plus persons with locomotion impairment) has been considered, it can be observed that the overall evacuation time increases by 8.96% compared to the baseline. The average evacuation time increases by 9.38%, while the average distance travelled has almost the same value as in S-III.1.
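A crude way to see why a smaller adult share lengthens the evacuation in Scenario III is the population-weighted mean unimpeded speed; the per-category means below are hypothetical placeholders consistent with the placeholder ranges used earlier, not the paper's values.

    # Hypothetical mean speeds in m/s (midpoints of the earlier ranges).
    MEAN_SPEED = {"adult": 1.25, "senior": 0.80,
                  "child": 0.85, "impaired": 0.60}

    def mean_population_speed(mix):
        """Population-weighted mean speed for a given category mix,
        e.g. mix = {'adult': 0.818, 'senior': 0.150, 'child': 0.030,
        'impaired': 0.002} for the baseline S-III.1. structure."""
        return sum(share * MEAN_SPEED[cat] for cat, share in mix.items())

    # Shifting weight from adults (the fastest category) to seniors lowers
    # this mean, which is the direction of the reported time increase.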
As the speed of the evacuating population determines the values of the considered indicators, it can be said, once more, that knowing and properly evaluating the characteristics of the evacuated population and adequately introducing them into the agent-based model can improve the accuracy of the simulation results. D. SIMULATION RESULTS FOR SCENARIO IV Scenario IV refers to the case in which the children and the persons with locomotion impairment are helped to evacuate by an adult agent. The children and the persons with locomotion impairment are created in the agent-based model near an adult, as they were in the previous scenarios, but, in this case, they evacuate together with the adult, choosing the same door as the adult with whom they are evacuating, with the adult walking behind the child or the person with locomotion impairment. The speed of the agents evacuating together is determined as the average of their default speeds. Besides these assumptions, we have kept the same population structure as in Table 9, with the only change that we have named the sub-scenarios using "IV" instead of "III". The results reported in Table 11 have been determined by running each sub-scenario 10,000 times and determining the average values of the indicators, rounded to the nearest integer. A comparison between the overall evacuation times in Scenarios III and IV can be observed in Fig. 18. A first observation based on Fig. 18 is that the overall evacuation time is smaller in the case in which the children and the persons with locomotion impairment evacuate with an adult agent. The improvement in the overall evacuation time is up to 1 minute and 6 seconds. The largest difference between the considered sub-scenarios is recorded between S-III.4. and S-IV.4. By dividing the category represented in Fig. 20 into adults (Fig. 21) and children and persons with locomotion impairment (Fig. 22), it can be observed that the situation in which the children and persons with locomotion impairment evacuate with adults is beneficial for this vulnerable category, as in the first 175 seconds of the evacuation 358 children and persons with locomotion impairment evacuate in S-IV.4., compared to 251 persons in S-III.4. It can be observed that even the adult category benefits from the situation in which they evacuate with other agents, as by helping the other agents, the entire evacuation process speeds up. As a result, 200 seconds after the start of the evacuation process, 494 adult agents have evacuated in S-IV.4., compared to 442 adult agents in S-III.4. - Fig. 21. Moreover, the senior category, even though it does not contribute directly to the new situation, as seniors evacuate as single persons, has much to gain from the faster evacuation process resulting from the children and persons with locomotion impairment evacuating with an adult agent. Based on the simulation results, it can be concluded that when the children and the locomotion impairment agents evacuate with an adult agent, the overall evacuation process is improved by minimizing the overall evacuation time. As for the average evacuation time, a 23-second improvement has been recorded in S-IV.4. compared to S-III.4., with smaller improvements in the other considered cases.
The average distance travelled remains almost the same, which is due to the rules regarding the random initial placement of the agents within the environment and to the fact that the agents always choose the closest evacuation door. E. SIMULATION RESULTS FOR SCENARIO V For Scenario V it has been assumed that not all the agents choose the closest evacuation door. The situation might arise when the participants at the event are not familiar with the environment, or when, due to low visibility, they are not able to localize the closest exit. Not selecting the closest door can also be a result of panic, or of a decision to go to a specific area in which one expects to find a relative or a friend. As Haghani and Sarvi [70] mentioned in their study, even though in normal times the proximity of a destination is the most prominent factor when selecting an exit, it has been proven to be completely irrelevant in emergency decisions. The reasons for not choosing the closest exit can be numerous, and one might envision even more of them when considering the psychological aspects involved in being in an emergency situation. We will not discuss them in this part, as the focus is on the changes in the evacuation time when this situation arises. The envisioned sub-scenarios are presented in Table 12. As in the previous cases, for the baseline case, S-V.1., we have considered the same conditions as in S-I.1., while for the S-V.2.-S-V.5. sub-scenarios we have reduced the percentage of agents choosing the closest door. As expected, not choosing the closest door leads to an increase in all three indicators - Table 13. Comparing S-V.1. with S-V.5., it can be observed that a 20% reduction in the share of participants choosing the closest door leads to an increase in the overall evacuation time of 1 minute and 6 seconds (12.07%), increasing the average evacuation time by 17 seconds (8.85%) and the average distance travelled by 4 m (15.38%). In terms of evacuated persons versus evacuation time, when comparing S-V.1. with S-V.5. (Fig. 24), it can be observed that in the first 250 seconds after the evacuation started, when all the agents select the closest door, 1439 agents are evacuated, while when only 80% of them choose the closest door, 1305 agents are evacuated (approximately 9.31% fewer agents). Based on the simulation results for this scenario, it can be stated that as the agents do not choose the closest door, the overall evacuation time, the average evacuation time and the average distance travelled increase. As a result, the parties interested in offering a safe evacuation process should consider applying proper means through which, during such an emergency, the population is guided to the closest exit, which results in a diminished evacuation time and in lives saved. F. SIMULATION RESULTS FOR A COMBINED SCENARIO A combined scenario has been set up in this section, in which elements from the individual scenarios have been considered in order to demonstrate once more that the combination of factors that might appear in an evacuation can lead to a completely different result than the one considered in an "ideal" case in which everything goes right. As the purpose was not to set up values for the variables that are hard to encounter in an emergency, a "mild" scenario has been considered. In this scenario, S-VI., we have started from the baseline scenario
S-I.1., in which we have altered some of the assumptions. As a result, it has been assumed that one of the emergency exit doors is not available to be used in the evacuation process (Exit C), that one of the main exits can only be partially used for evacuation (Exit B), that the children and the persons with locomotion impairment evacuate with an adult, and that only 80% of the agents choose the closest door. The results obtained are summarized in Table 14. It can be observed that for this scenario, S-VI., the overall evacuation time is 14 minutes and 35 seconds (59.96% higher than in S-I.1.), the average evacuation time is 5 minutes and 25 seconds (69.27% higher than in S-I.1.), while the average distance travelled is 41 m (57.69% higher than in S-I.1.). As for the evolution of the evacuation process in S-I.1. compared to S-VI., Fig. 25 presents the number of evacuated agents versus time. It can be observed that in the S-VI. scenario the number of agents evacuated in the first 400 seconds is 1401, compared to 1848 in S-I.1. Once more, it can be stated that for ensuring a proper evacuation process, knowledge of the location to be evacuated is of utmost importance, along with the characteristics and the behavior of the evacuated persons. As shown in the cases presented above, small changes in different indicators can lead to various changes in the result, while the combination of different categories of changes can produce totally different results than expected. As a result, the proposed agent-based model can serve as a tool through which the different aspects related to the modeled event hall are represented in a simplified manner and the characteristics of the evacuees are better shaped. Through the large number of simulations one can conduct in a short amount of time and through the visual interface offered by the agent-based model, one can better understand which elements need to be considered and what can be done in order to improve the values of the considered indicators, with a direct result in improving the entire evacuation process. The model has some limitations related to the elements considered in the evacuation process and can be further improved by including different states in which the agents can find themselves as a result of the feelings they experience during such an event, states that can determine a specific behavior for each agent. VII. LIMITATIONS OF THE STUDY A potential limitation of the study is related to the fact that the validation of the model has been made only on an adult population. This limitation has not been further investigated in the present paper as, through the validation made on the adult agents, it has been observed that the differences between the overall evacuation time in the case of the adapted agent-based model and the on-site simulations have been below 5%. As a result, similar differences between real-life simulations and the agent-based model are expected in the case of the other categories of people considered in the model. Even more, the agents' speed is a parameter of the model, so anyone interested in conducting their own simulations on an already-known population can easily introduce the speeds of the considered categories of persons and observe the behavior of the evacuees and the values of the resulting variables. Another limitation is given by the number and types of variables used in the agent-based model.
We acknowledge that with an increase in the number of variables considered in the study, the incidence of each variable can be better shaped. Even more, it would be interesting to observe the effect of the cumulated factors on the evacuation time. Additional statistical analysis can be conducted on the obtained results, in accordance with the specifics of the study one decides to conduct. Lastly, the paper focuses on a general evacuation process, as described above. Considering some specific evacuation processes (e.g., a fire broken out at fixed points of the room), one can observe that the impact of the considered factors might change. This limitation refers to the results obtained for the output variables. As for the agent-based model, it can be adapted to better fit the situation one needs to simulate. VIII. CONCLUDING REMARKS The paper discusses the use of agent-based modeling and simulation in the context of large event hall evacuation, for better understanding the overall evacuation process and for observing the elements that can contribute to a prolonged evacuation process. For this purpose, an agent-based model is built in NetLogo 6.2.2 and an "adapted cone exit" approach is proposed, which is useful in guiding the agents to the closest exit. A series of scenarios have been envisioned and simulated for better observing how changes in different aspects of the evacuation process can influence the overall evacuation time, the average evacuation time and the average distance travelled by the agents. Specifically, for the selected event hall, it has been observed throughout the simulations that when the number of participants attending an event increases, Exit D is the last door through which the evacuation takes place. As a result, it is recommended that, in the future, if possible, the width of this exit be increased in order to reduce the evacuation time. Even more, as Exit B plays an important role in the evacuation process, it is advisable for the persons in charge to take all the measures needed to keep it available for emergency situations, by checking the space around the door and making sure that the door is not blocked or accidentally locked. As choosing the closest door has a positive impact on the evacuation time, it is advisable that the persons in charge take all the measures needed to ensure that the attendees of the event are familiar with the positions of the evacuation doors; this can be done by placing a small map on the back of the ticket or by playing a short video on the screen just before the start of the event. Nevertheless, helping other persons to evacuate should be among the elements kept in mind for decreasing the evacuation time. As a result, awareness of helping other in-need persons can be increased through short videos or messages posted in the surrounding area of the event. Given the flexibility of the agent-based model, the behavioral aspects of the evacuees and the environmental characteristics of the evacuated space can be effectively represented in the model. In particular, given the simulation results and the visual analysis that can be conducted individually on each scenario, the predominant factors affecting the evacuation process can be identified and the potential measures can be properly evaluated prior to being implemented in practice. As a result, the safety level of the evacuation process can be improved.
Future research can expand the model by including more information related to the vulnerable population, the feelings and reactions one might experience in an evacuation process, with their effects on the evacuation behavior, or the impact of specific events (e.g., different levels of gas concentration, the occurrence of multiple fire points within the event hall) on people's behavior and on the evacuation process. The impact of each newly considered element can be further related to the resulting variables in order to better observe which of these elements lead to the greatest changes in the resulting variables. For this type of analysis, grey systems theory can be used, by computing several degrees of grey incidence, thereby revealing the hierarchy of the elements with the highest incidence on the evacuation time.
Unraveling Natural Killer T-Cells Development Natural killer T-cells are a subset of innate-like T-cells with the ability to bridge innate and adaptive immunity. There is great interest in harnessing these cells to improve tumor therapy; however, a greater understanding of invariant NKT (iNKT) cell biology is needed. The first step is to learn more about NKT development within the thymus. Recent studies suggest lineage separation of murine iNKT cells into iNKT1, iNKT2, and iNKT17 cells instead of shared developmental stages. This review will focus on these new studies and will discuss the evidence for lineage separation in contrast to shared developmental stages. The author will also highlight the classifications of murine iNKT cells according to identified transcription factors and cytokine production, and will discuss transcriptional and posttranscriptional regulation, and the role of mammalian target of rapamycin. Finally, the importance of these findings for human cancer therapy will be briefly discussed. T-cells and MR1-restricted mucosal-associated invariant T-cells (7), but these populations will not be discussed in this review. Antigen recognition by NKT cells and their development within the murine thymus will be discussed. Recent publications suggest a classification of murine iNKT lineages according to their transcription factor (TF) expression and cytokine secretion. Therefore, the author will discuss the transcriptional and posttranscriptional regulation of iNKT cell development and function, and the role of mammalian target of rapamycin (mTOR) within iNKT cell subsets. This new lineage concept will be compared to the previous categorization into three developmental stages. iNKT and NKT_II Cell Antigen Recognition Unlike convT cells, iNKT cells bear a semi-invariant TCR, arising from the rearrangement of a single TCR α chain with a unique Jα segment in combination with limited TCR β-chain usage. This results in a rearranged Vα14-Jα18/Vβ8, Vβ7, or Vβ2 TCR in mice and Vα24-Jα18/Vβ11 in humans (1). Human iNKT cells can be divided into CD4+, CD8+, and CD4−CD8− subsets, and murine iNKT cells into CD4+ and CD4−CD8− (7). This TCR shows unique reactivity to the glycolipid αGalCer bound to CD1d (8), and CD1d-αGC tetramers have proven an invaluable tool to study iNKT cell biology (9). Conversely, NKT_II cells use different combinations of TCR chains, both in mice and humans. Due to their diverse TCR rearrangement, one possibility to study murine NKT_II cells is by comparing mice lacking only iNKT cells [Jα18-deficiency (10,11)] with mice lacking all NKT cells [Cd1d-deficiency (12)]. Using a Jα18-deficient interleukin (IL)-4 reporter model, type II NKT cells can be tracked by their expression of GFP and TCRβ (13). This model has allowed the demonstration that murine NKT_II cells display diverse α- and β-chains with dominant Vα8 and Vβ8.1/8.2 chains (13). Even though NKT_II cells are dominant in humans (6), due to their TCR chain diversity and the lack of specific reagents to identify them, they have not been studied as intensively as iNKT cells. Thus, many details of NKT_II subsets remain ill defined. What is currently known about NKT_II cells has been recently reviewed (14) and will not be further discussed within this review. Both NKT cell types share the recognition of various lipid antigens presented on CD1d molecules (1), but use different complementarity-determining region (CDR) loops for antigen binding (6). Like convT-cells, both NKT cell types are selected within the thymus (1).
OVERVIEW OF THE LINEAGE FATE WITHIN THE MURINE THYMUS

CD4−CD8− lymphoid precursors travel from the bone marrow via the blood to the thymic corticomedullary junction (15). Due to close contact with thymic epithelial cells, and through mechanisms which will not be discussed in this review, the "thymocytes commit to a T-cell fate" with TCR rearrangement and upregulation of CD4 and CD8 (15). At this stage, the NKT cell population seems to split from convT-cells (7). iNKT cells are selected if their TCR recognizes self- or foreign lipid antigens on CD1d molecules expressed by CD8+CD4+ thymocytes [double positive (DP)] (16). Furthermore, iNKT cell development needs the expression of NFKB-activating protein and histone deacetylase 3 (17) and depends on microRNAs (18,19). As the Jα18 rearrangement is a late event, DP cells need to survive for a distinct period of time. Thus, all mutations limiting the lifespan of DP cells affect iNKT development (20). Further differentiation and maturation of CD69+CD24+ iNKT precursor cells is initiated by parallel binding to the co-stimulatory signaling lymphocytic activation molecules (SLAMs) SLAMF1 and SLAMF6, which signal downstream via the SLAM-associated protein (SAP) (21). SLAMF6 augments downstream phosphorylation due to enhanced TCR signaling, increasing the expression of the TF Egr2 (22). iNKT cells were also shown to receive stronger TCR signaling compared to convT-cells (23). Interestingly, stimulation by the convT-cell co-stimulatory molecule CD28 induced only a minor increase in Egr2 expression (22). EGR2 binds to the Zbtb16 promoter region, which induces the expression of the TF promyelocytic leukemia zinc finger (PLZF) (22), a master regulator of iNKT cell development and function (24). Zbtb16-deficient mice are unable to develop iNKT and NKT_II cells further than the naïve state (13,24), showing the importance of PLZF in early NKT development. In line with these findings, SAP-deficient mice show a decrease in PLZF expression in early developmental stages of iNKT cells (25) and a 10-fold decrease in NKT_II numbers (13). In this early developmental state [which was originally defined as stage 0 (1)], NKT cells express the surface molecules CD69+CD24+CD4+CD8+/− (1,13) and express the TFs EGR2 and PLZF.

THE DEVELOPMENTAL STAGES OF MURINE iNKT CELLS

Three developmental iNKT stages based on cell surface molecule expression of CD44 and NK1.1 have been described (Figure 1). However, this categorization is not ideal, as NK1.1 is not universally expressed in all mouse strains (26,27). Recently, iNKT cells were categorized according to TF and cytokine expression profiles into iNKT1, iNKT2, and iNKT17 lineages (26-28), and these were mapped onto the developmental stages (26,28,29) (Figure 1). This new classification of iNKT cells, an alternative to the shared developmental stages, favors clear lineage separation (27,28,30). This review will give more insight into the newly defined iNKT lineages and will discuss the relationship between the three groups in relation to the developmental stages. Of note, evidence of more iNKT subsets exists (2). Transcriptome analyses of iNKT cells sorted according to this classification showed three distinct populations in principal component analyses (PCA) (28,31).
Using several RNA sequencing methods, one study identified unique homing molecules within individual iNKT subsets in C57Bl/6 mice: CXCR3, CCR5, and VLA-1 for iNKT1; CCR4 and CCR9 for iNKT2; and CCR6, Itgb4, Itgb5, and Itgb7 (encoding integrin subunits) for iNKT17 (31), which may explain their difference in tissue distribution and the correspondingly altered cytokine profiles of the three subsets (32). In a different paper, the Hogquist group used RNA sequencing and microarray data from Balb/c and C57Bl/6 mice to investigate the relationship between the above-described iNKT cells and other cell subsets, including innate lymphoid cells (ILCs), T-cells, and natural killer (NK) cells (28). The iNKT1 transcriptome was similar to TH1, ILC1, γδ T-cells, and NK cells (28), which also express IFNγ. iNKT2 and iNKT17 showed more transcriptome similarity to their "respective ILC and γδ T-cell counterpart", but not to TH2 and TH17 (28). As ILC precursors express PLZF (33), the authors suggested PLZF as a master TF for innate-like T-cells and ILCs (28), indicating a more "unidirectional gene programming in IFNγ expressing cells" (28). It would have been interesting to know whether the authors found other potentially interesting regulatory genes, as they only acknowledged already-described genes for the three different iNKT populations; yet these genes did not show the highest fold change within the volcano plots. In order to produce IFNγ, T-bet and its co-factor Bhlhe40, which opens the Ifnγ locus, are needed (35). Besides a crucial role in early iNKT development, Egr2 expression also seems to be essential for further iNKT1 development. Egr2-deficient thymocytes do not develop past developmental stage 2 (34). Besides binding to the Zbtb16 promoter (22), Egr2 can bind to the Il2rb promoter (34), inducing the expression of CD122, a shared component of the IL-2R (36) and IL-15R (29,36). Responsiveness to IL-15 is needed for final development into stage 3 NKT cells (34). As only iNKT1 cells were described as belonging to this stage, signaling via IL-15 could trigger downstream cell-intrinsic restructuring programs favoring an iNKT1 fate. In favor of this hypothesis is the demonstration that IL-15 signaling regulates T-bet in murine CD8αα+ intraepithelial lymphocytes (37). Whether this also applies to iNKT cells remains to be elucidated. CD14+ monocytes/macrophages, and to some extent B cells, were shown to produce IL-15 within the medulla and in cortical clusters within human thymi (38). This might be the source of IL-15 for iNKT1 cells. Another control mechanism is the upregulation of the microRNA let-7, which leads to a downregulation of PLZF, as "two conserved binding sites were found in the 3′UTR" of Zbtb16 (29). Further, the mRNA expression profiles of Zbtb16 and let-7 showed an inverse correlation (29). Interestingly, this paper showed conserved let-7 binding sites in mice and humans, raising the question of whether let-7 also regulates expression profiles in human iNKT cells. However, a downregulation of PLZF in iNKT1 cells was only shown within the thymus (29), suggesting a role for other mechanisms in peripheral tissues (29). Additionally, NKT subtypes might also be selected via their TCR signaling capacity, as FcεR1γ-deficient mice showed a decreased iNKT1 cell count but an increase in iNKT2 cells (27). An upregulation of the FcεR1γ chain, generally known as part of the high-affinity IgE receptor, was detectable in iNKT1 cells (27).
Together with CD3ζ, the FcεR1γ chain can form the natural cytotoxicity receptor NKp46 (39), first described in NK cells, while in T-cells it could lead to altered TCR signaling (27). By cell surface molecule classification, iNKT2 cells are thought to belong to developmental stages 1 and 2, sharing stage 2 with iNKT17 cells (26) (Figure 2). iNKT2 and iNKT17 cells were also shown to share gene expression patterns (27), including Gata-3 (26), Irf-4 (26), and Zbtb16 (27). It is difficult to judge whether these findings are universal, as one paper has limited statistical power (n = 2) but uses both Balb/c and C57Bl/6 mice, while the other paper shows exclusively C57Bl/6 data. A recent publication highlights the possible importance of SAP for driving an iNKT2 fate. SAP-deficient mice showed decreased expression of Gata-3 and Zbtb16, but an increase of Rorγt, leading to 10-fold more iNKT17 cells in these mice (25). Hardly any difference in iNKT1 cell count or percentage was detectable (25). The serine protease inhibitor SerpinB1 is associated with regulation of TH17 and IL-17-producing γδ T-cells (41). Interestingly, SerpinB1-deficient mice showed an increased percentage of iNKT17 cells, even though total iNKT cell numbers remained unchanged (27), leading to the authors' conclusion that SerpinB1 is a negative regulator of IL-17-producing cells (27,31). Another regulatory TF for iNKT17 cells could be Bcl11b, as PLZF cre Bcl11b fl/fl mice showed an overall reduction in iNKT cells. This was due to reduced survival, with a higher percentage of cells within stages 0-2 and a reduced stage 3 subset (40). Analysis of the subset-specific TFs and cytokine secretion showed that these mice had reduced T-bet and IFNγ expression and reduced IL-4 expression, but Gata-3 expression similar to WT (40). Simultaneously, Rorγt and other iNKT17-associated genes, which had been found exclusively expressed in iNKT17 cells (31), were upregulated not only in iNKT17 cells but also in iNKT2 and iNKT1 cells (40).

Cross Antagonism in iNKT Cells

Initially, it seems contradictory that only iNKT2 cells are affected by decreased PLZF expression, as iNKT17 and iNKT2 cells are thought to express the same developmental-stage surface molecules and were both shown to express PLZF. High PLZF expression might not be mandatory for iNKT17 differentiation, but may be needed for iNKT2 and iNKT17 cells to separate from an iNKT1 fate, as mature iNKT1 cells show low PLZF expression. In favor of this is the cross antagonism of TH1 and TH2 (42), where Gata-3 and T-bet can inhibit one another and decide the cell fate (42). However, evidence is growing against this assumption, as lineages have been shown not to necessarily arise from precursors, but can instead arise by "direct conversion" from one type to another through genetic reprogramming (43) and due to "poised" epigenetic stages (44). This might explain why, in developmental stages 2 and 3, co-expression of Ifnγ and Il-4 mRNA was detected (29), and why Tbx21 (the gene for T-bet) and CXCR3 can be found in iNKT2 cells (31). As an antagonism of Gata-3 and Rorγt has not yet been reported, there is the possibility that iNKT2 and iNKT17 cells cannot be seen as two separate populations. It could be possible that iNKT17 cells convert into iNKT2 cells depending on the microenvironment, as suggested by Waddington's epigenetic landscape in 1957. This would explain their shared genetic program and developmental-stage surface molecules.
Transcriptome analyses support this, as Gata-3 expression was not unique to iNKT2 cells and could also be found in iNKT1 and iNKT17 cells (27,28). However, only iNKT2 cells were shown to secrete IL-4 (26,27). As Gata-3 is seen as the regulatory TF for IL-4 expression and TH-like lineage fate (26), posttranscriptional regulation must be present to inhibit Gata-3 from binding to the IL-4 promoter region. Two recent papers suggest a role for microRNAs in controlling Gata-3 expression: the genetic variant rs1058240 and microRNA-720 are proposed to bind to the human Gata-3 3′UTR (45,46). Overexpression of microRNA-720 leads to reduced Gata-3 mRNA and protein levels, as well as to a decrease of surface molecules associated with human alternative macrophage activation (46). However, the authors did not study the effect of the reduced Gata-3 expression with respect to IL-4 expression. Further evidence might emerge from analyses of the epigenetic status of the lineage regulatory genes within convT-cells (44). TH17 cells express predominantly permissive H3K4me3 marks at the Gata-3 locus, thus TH17 cells might be able to convert into TH2 cells (44), while TH2 cells show repressive H3K27me3 marks at the Rorγt locus (44). Even though this needs to be validated within iNKT cells, it might explain why only negative regulators have been found to give rise to iNKT17 cells. Interestingly, deficiency of Runx1 (47) or c-Maf (48), both of which are expressed in all three iNKT subsets, can lead to selective impairment of iNKT17 differentiation. Runx1 is essential for overall iNKT development, proliferation, and survival (47), while c-Maf is upregulated in αGalCer-activated iNKT cells (48). Runx1-deficient mice showed a significant decrease in overall thymic iNKT counts, but this decrease reflected only an iNKT17 deficit (47). In c-Maf-deficient mice, reduced Rorγt expression and correspondingly reduced IL-17A production were found, but iNKT development was otherwise normal (48). Both studies can be seen as favoring iNKT conversion, as essential iNKT TFs are required for iNKT17 differentiation and are not unique to iNKT17 cells.

mTOR Effects on iNKT Development

Besides transcriptional regulation, the mTOR pathway has also been described to regulate iNKT cell fate. mTOR is a serine/threonine kinase which regulates cell growth and metabolism. Two different mTOR complexes can be found: mTOR complex 1 (mTORC1), containing Raptor, which is involved in "translation initiation, autophagy inhibition, lipid synthesis" (49) and controls innate and adaptive immunity (50); and mTORC2, containing Rictor, which is "involved in actin remodeling and nutrient uptake" (49). Both pathways were shown to contribute to iNKT development, as iNKT cell frequencies were reduced in CD4+ T-cell-specific Raptor and Rictor conditional knockout mice (49,50). In CD4 cre Raptor fl/fl mice, the authors reported an accumulation of iNKT cells within stage 0, two-thirds in stage 1, one-third in stage 2, and an absent stage 3 (50). The remaining iNKT cells showed high PLZF expression, consistent with the early developmental block. Consistent with the loss of stage 3 iNKT (iNKT1) cells, the authors also showed a decrease in T-bet-expressing iNKT cells with a concomitant enrichment of stage 1 iNKT (iNKT2) cells. However, the composition of stage 2 iNKT cells regarding iNKT2 and iNKT17 frequency was not fully elucidated. The published literature is controversial regarding which of the described iNKT subsets is affected in CD4 cre Rictor fl/fl mice.
Two papers showed a cell-intrinsic defect in iNKT cell development in the absence of Rictor (30,49). However, while one group (49) demonstrated a substantial effect on NKT2 development and on thymic IL-4 secretion and GATA3 expression, a second group reported a selective effect on the NKT17 lineage (30). The source of the animals and the influence of the animal facility on the microbiota could explain the differences in the detection of the NKT17 subset. Of note, autophagy has also been described to play an essential role in iNKT cell development (51,52). In T-lymphocyte-specific conditional knockout mice (CD4cre) lacking the essential autophagy genes Atg7 (51,52) or Atg5 (52), iNKT cell development was blocked at an early stage and no mature peripheral iNKT cells were found (51,52). It is known that cell fates determine the overall direction of the immune response; for example, IFNγ production, seen in human NK cells, T-cells, and iNKT cells, is important for antitumor responses (55). Thus, increasing IFNγ-producing cells is one goal of tumor therapy. As iNKT cells, in contrast to CD3+ T-cells, have been shown to be unaffected by the suppressive effects of CD15+ granulocytic myeloid-derived suppressor cells in head and neck cancer patients (56), they represent an interesting tool for tumor therapy. In a recent Phase I clinical trial, iNKT cells expanded in vitro with anti-CD3 and IL-2 were adoptively transferred into stage IIIB-IV melanoma patients, and the approach proved to be safe and tolerable (57). Even though patient iNKT cells showed markedly enhanced IFNγ production posttreatment compared to pretreatment, they also produced IL-4 (57), which is associated with asthma (53,58), and the anti-inflammatory cytokine IL-10 (57). As both cytokines can induce unwanted side effects in patients, understanding the molecular mechanisms behind cell fate decisions could be beneficial for therapy. Thus, understanding transcriptional regulation within murine models can benefit human cancer therapies.

CONCLUSION

Looking at the data within this review, one can find studies in favor of the developmental-stage theory and studies against it. In favor of shared developmental stages are the distinct cutoff at stage 2 in Egr2-deficient mice (34) and the control of iNKT17 development by negative regulatory genes (27,40), which could also be seen as a separation from iNKT2 cells occupying stage 2. Also, CD4 cre Raptor fl/fl mice showed an accumulation of iNKT cells within stage 0 and a reduction in stages 1 and 2 (50). All these studies suggest a shared developmental pathway within iNKT cells. In favor of lineage differentiation are the increased iNKT17 population in SAP-deficient mice, with normal iNKT1 cell counts and an absent iNKT2 population (25), and the observation that TCR signaling strength, as seen in FcεR1γ-deficient mice, might give rise to one population instead of another (27). Further, if a shared developmental stage is assumed, iNKT2 and iNKT17 cells sharing stage 2 should cluster more closely within the PCA (28,31). All in all, murine iNKT cell development still seems puzzling. Overall, some differences in iNKT subset detection may be semantic and may depend on the individual mouse strain used. Furthermore, microbial effects in mice within different breeding facilities may influence the different iNKT subset compositions seen in different publications. Nevertheless, more insight will be gained by deeper transcriptional analyses performed in parallel with phenotyping, as the latter is currently limited to about 20 fluorophores.
Unbiased approaches such as CyTOF or t-SNE may further reveal iNKT cell differences and may account for the observed mouse-strain-specific differences. Furthermore, both approaches can reveal more insights into human iNKT cell development and highlight how these cells can be used more effectively in cancer therapy.

AUTHOR CONTRIBUTIONS

SB designed, wrote, revised, and approved the review herself. She is accountable for all aspects of this review.

ACKNOWLEDGMENTS

The author acknowledges the MSc Integrated Immunology course at the University of Oxford. This review originated from its teaching, learning, and assessment activities, and the course defrayed the publication charges. The author also acknowledges Dr Mariolina Salio for critically reading the review.

FUNDING

The author's stay in Oxford was funded by a scholarship from the Stiftung Begabtenförderung berufliche Bildung-Gemeinnützige Gesellschaft mbH, which is a scholarship for mature students, by the German Federal Ministry of Education and Research, and by private resources.
A nonlocal gradient concentration method for image smoothing

It is challenging to consistently smooth natural images, yet smoothing results determine the quality of a broad range of applications in computer vision. To achieve consistent smoothing, we propose a novel optimization model making use of the redundancy of natural images, by defining a nonlocal concentration regularization term on the gradient. This nonlocal constraint is carefully combined with a gradient-sparsity constraint, allowing details throughout the whole image to be removed automatically in a data-driven manner. As variations in gradient between similar patches can be suppressed effectively, the new model has excellent edge-preserving, detail-removal, and visual-consistency properties. Comparisons with state-of-the-art smoothing methods demonstrate the effectiveness of the new method. Several applications, including edge manipulation, image abstraction, detail magnification, and image resizing, show the applicability of the new method.

1 Introduction

Image smoothing is a fundamental and important issue in computer vision. Natural images contain both clear structural edges of objects and abundant details caused by lightness, textures, and so on. Psychological studies show that human beings tend to pay more attention to the outlines of objects than to trivial details [1]. Indeed, images containing main structures but without details can be of use in many applications such as edge extraction, image abstraction, and tone mapping. Image smoothing aims to produce images which discard insignificant details while preserving the main structural edges. Because of the complexity of natural images, it still remains difficult to give explicit definitions that a computer can use to distinguish between main edges and trivial details, whereas human beings can make such decisions flexibly. During the past decades, image smoothing has attracted much research. In terms of approach, previous methods can be loosely classified into two groups: edge-preserving methods and structure-preserving methods. Edge-preserving methods [2-7] consider that human eyes are sensitive to color changes between neighboring pixels, so they aim to preserve strong-contrast edges. However, they cannot remove fine-scale details with large or oscillatory gradient amplitudes. Structure-preserving methods [8-17] design smoothing models based on the assumption that sliding windows containing different patterns, such as structure and oscillatory details, behave differently under suitable measurements. Although fine-scale details can be smoothed out, edges tend to be shifted from their original positions in natural images because of the patch-wise operator. In fact, all previous works share a common defect: the quantitative measure cannot properly reflect visual importance. Using contrast or local statistical responses, a computer cannot exactly distinguish details from structures. Therefore, the two main goals of image smoothing, detail removal and edge preservation, are often in conflict.
It is worth noting again that we aim to smooth out details while preserving main structures in a consistent manner. The output image should be composed of sharp structural edges and homogeneous regions. After analyzing the properties of gradient maps in natural images, we find it is reasonable and practical to assume the ideal gradient map of a smoothed image should be sparse in space and nonlocally concentrated in amplitude. In order to achieve consistent smoothing performance, we propose a novel image smoothing method which we call the nonlocal gradient concentration (NGC) method. NGC is data-driven and can be formulated as an optimization problem. In the new model, a nonlocal self-similarity property is assumed, leading to a nonlocal concentration constraint on the gradient map. Specifically, under the guidance of nonlocally similar patch groups, unstable details can be reduced, and meaningful structures which are lower in contrast can be kept. Nonlocal self-similarity is an intrinsic and useful property of natural images, arising due to redundancy. Typically, it is easy to find similar patches to a given patch. In computer vision and image processing, this property has been adopted as prior knowledge to achieve significant and surprising improvements. Exploiting self-similarity in color and intensity space and in the transform domain has led to many state-of-the-art algorithms for various applications [18]. However, it has not been fully adopted in image smoothing. As far as we know, our work is the first to use nonlocal similarity on the gradient map for image smoothing. Some results produced by our method are illustrated in Fig. 1, showing that our method can remove details while keeping structural edges.

The main contributions of our work are as follows:
• We introduce a nonlocal constraint on the gradient map for the first time in image smoothing, and use it as a basis for a new optimization framework.
• We present an efficient iterative algorithm for optimizing the new energy model.
• We demonstrate the ability of our method in several applications.

2 Previous work

This section reviews some representative edge-preserving and structure-preserving smoothing methods and analyzes them with an example shown in Figs. 2 and 3.

2.1 Edge-preserving methods

Filtering using a weighted-averaging operation is a common scheme. Tomasi and Manduchi [2] proposed the simple and popular bilateral filter (BLF) in 1998. For each pixel i in the input image f, the output u_i is computed as

u_i = Σ_j w_{i,j} f_j,   (1)

where {w_{i,j}} are normalized weights over the pixels of f. These are in inverse proportion to the spatial distance and the color or intensity difference, which prevents strong edges from being over-smoothed. He et al. [3] proposed a guided filter in 2010, where a separate image is adopted to guide the linear translation-variant filtering procedure. It is similar to BLF in its edge-preserving properties, but it solves the gradient reversal problem in BLF and performs better near edges. The local Laplacian filter [4] was proposed in 2011 by Paris et al. It manipulates multi-scale details to give halo-free smoothing results. The advantage of this filter is its simplicity. As can be seen from Figs. 2(b)-2(d), the output of these filters neither fully preserves the really important edges, nor completely removes some meaningless textures.
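To make the weighted-averaging scheme of Eq. (1) concrete, here is a minimal, unoptimized sketch of a grayscale bilateral filter in Python/NumPy; the window radius and the two Gaussian scales sigma_s and sigma_r are illustrative choices, not values taken from the cited papers.

```python
import numpy as np

def bilateral_filter(f, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Naive bilateral filter for a 2D grayscale image f with values in [0, 1]."""
    H, W = f.shape
    u = np.zeros_like(f)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))  # spatial weights
    pad = np.pad(f, radius, mode='edge')
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weights penalize intensity differences, preserving strong edges
            w = spatial * np.exp(-(patch - f[i, j])**2 / (2.0 * sigma_r**2))
            u[i, j] = np.sum(w * patch) / np.sum(w)  # normalized weighted average
    return u
```

The double loop makes the cost O(N) window evaluations per pixel; fast approximations exist, but the direct form above is the clearest instance of Eq. (1).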
Optimization methods with regularization involving edges are another popular approach. The general form used in such optimization methods is to solve

arg min_u ||u − f||₂² + λ R(u),   (2)

where R(u) is a regularization term and λ is the regularization parameter. Rudin et al. [5] proposed the well-known total variation (TV) regularizer in 1992, which uses the L1 norm of the gradient map. Farbman et al. [6] proposed an alternative edge-preserving operator based on weighted least squares (WLS) in 2008. WLS involves an L2 norm. However, both TV and WLS penalize large gradient amplitudes, so image contrast is affected, as analysed in Ref. [7]. As shown in Fig. 2(e), image intensity varies significantly from the input image. In order to get rid of the impact of gradient amplitudes, Xu et al. [7] proposed a new regularizer using the L0 gradient norm in 2011:

arg min_u ||u − f||₂² + λ ||∇u||₀.

It can preserve salient edges globally without blurring, as shown in Fig. 2(f). Edge-preserving methods assume, explicitly or implicitly, that pixels with large gradient magnitudes are located on important edges. As shown in Figs. 2(b)-2(f), these methods are unable to remove small-scale textural details.

2.2 Structure-preserving methods

Structure-preserving methods rely on statistical features within local sliding windows to remove oscillations and extract structures. Mode filters can suppress details by analyzing the histogram within a local sliding window; the median filter is the classic example. Subr et al. [8] proposed the use of local extrema envelopes in 2009. First, maximum and minimum envelopes are constructed respectively using extrema detected in local sliding windows. Highly contrasting oscillations can be removed simply by computing the smoothed mean envelope. Xu et al. [11] proposed the relative total variation (RTV) method in 2012. They observed that the inherent variation (the aggregation of signed gradients) in sliding windows containing texture is much smaller than in windows containing structure. RTV can remove textures in mosaic images well, but it performs less well for natural images, because complex lighting and perspective distortion make RTV over-smooth details. Karacan et al. [12] proposed a region covariance (RC) method in 2013. They made use of covariance matrices of simple image features to capture local structure and texture information in local patches. It can preserve prominent edges and shading while removing texture, but the resulting structure edges lack sharpness. A novel image decomposition method [13] was proposed by Su et al. in 2013. They applied a Gaussian decomposition and an asymmetric sampling operator to separate texture from structure, and then used a joint bilateral correction to suppress blurring. However, it does not perform well in practice because too many variables are involved. The tree filter [15] was proposed by Bao et al. in 2014. It is a trilateral filter in which a tree distance is adopted for weighted averaging, in addition to the two factors in BLF. The minimum spanning tree can deal with fine-scale details. As shown in Figs. 2(g)-2(j), with these methods the edges are either blurred heavily or shifted mistakenly. As shown in Fig. 2(l), details on the ostrich are removed and the edges of the ostrich are kept consistently by our method. Figure 3 analyzes a scanline further. Figure 3(l) shows that the oscillations on the neck are smoothed thoroughly, while contrast near the two main features is retained well without being over-smoothed.
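The windowed statistic behind RTV can be made concrete with a short sketch. The measure below contrasts the aggregated signed gradient |Σ∇f| with the total absolute variation Σ|∇f| in each window; the intuition is that signed gradients nearly cancel in oscillatory texture but not across a structural edge. The window size and the epsilon guard are illustrative choices, and this is a simplified proxy rather than the exact RTV formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def structure_measure(f, size=7, eps=1e-6):
    """Ratio |sum of signed gradients| / sum of |gradients| per window.

    Values near 1 indicate a clean structural edge (gradients share a sign);
    values near 0 indicate oscillatory texture (signed gradients cancel)."""
    gx = np.diff(f, axis=1, append=f[:, -1:])  # forward differences, same shape as f
    gy = np.diff(f, axis=0, append=f[-1:, :])
    num = np.abs(uniform_filter(gx, size)) + np.abs(uniform_filter(gy, size))
    den = uniform_filter(np.abs(gx), size) + uniform_filter(np.abs(gy), size)
    return num / (den + eps)
```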
3 Model

As noted in Section 2, smoothing methods with gradient constraints have been studied, but their performance is unsatisfactory. To achieve consistently smoothed results, we propose a nonlocal optimization model in Section 3.3, based on the observations made in Section 3.2. Before describing our new model, we first review the general formulation of optimization with a gradient constraint in Section 3.1.

3.1 Background

Gradient information is vital to the human visual system. Making use of pixel-wise gradients provides the advantage that edges do not shift from their original positions. Therefore, we focus on optimization-based methods with gradient constraints in this paper. The general form of a gradient-based optimization model for image smoothing is

arg min_u ||u − f||₂² + λ R(∇u),   (3)

where f and u are the vectorized input and output images respectively, and ∇u = (∇_x u, ∇_y u) denotes the gradient. R(∇u) is a regularization term constructed from prior knowledge concerning the gradient map of the ideal smoothed image. The regularization parameter λ balances the smoothing regularization term R(∇u) against the fidelity term ||u − f||₂² to control the degree of smoothing.

3.2 Motivation

The key to the model defined in Eq. (3) is to make reasonable and practical assumptions about the gradient map of the ideal smoothed image. An ideal smoothed image should be composed of sharp structural boundaries between different objects and consistently flat components within homogeneous regions. Correspondingly, the gradient map of the ideal smoothed image should have the following two properties.

Property 1. The gradients of pixels on structural edges should be non-zero, and gradients of pixels belonging to the same structural edge should be consistent. In Fig. 4(a), the left sub-image contains a structural edge segment for the squirrel. As shown in Fig. 4(b), along the structural edge, the gradient amplitudes of most pixels are large, but some have globally lower gradient amplitudes.

Property 2. The gradients of pixels in homogeneous regions should be zero. In Fig. 4(a), the right sub-image is a representative texture-detail region of a stone. As shown in Fig. 4(b), the amplitudes of the gradients in this sub-image are oscillatory and can be seen as stochastic perturbations. Some pixels have even higher contrast compared to structural edges elsewhere in the image, while some have low gradient amplitudes.

In order to make use of the above observed properties to guide the smoothing procedure, we need to quantify them by calculable measures. Consider the spatial distribution of non-zero gradients described in Property 1. One quantitative measure of the gradient map can be provided by a discrete counting scheme. It involves the L0 norm, as suggested by Ref. [7]. Use of this first measure can avoid edge blurring and can give impressive results with strong-contrast edges. However, Property 1 relies on the pixel-wise gradient amplitudes of the input image to implicitly determine structural edges. Any textures and details will affect the edges in the input image, so it is unreliable. As shown in Fig. 4, for natural images, the gradient amplitudes of the input image cannot completely distinguish pixels on structural edges from pixels in homogeneous regions. Firstly, some parts of structural edges have comparatively low gradient amplitudes in the input image. Secondly, many fine-scale texture-like details have high contrast. Therefore, the model should not completely rely on pixel-wise gradients of the input image. To achieve better smoothing performance, the gradient amplitudes of the ideal smoothed image should be estimated carefully to better guide the separation of structural pixels from detail pixels. Based on this observation, we propose a new prior as the second measure to help estimate the ideal gradient map.
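Before turning to the model definition, the first (counting) measure just discussed can be made concrete in a few lines; the small tolerance standing in for "exactly zero" is an illustrative numerical guard.

```python
import numpy as np

def gradient_l0(u, tol=1e-8):
    """Count pixels whose gradient amplitude |du/dx| + |du/dy| is non-zero."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    amp = np.abs(gx) + np.abs(gy)
    # for an ideal smoothed image this count is small and concentrated on edges
    return int(np.count_nonzero(amp > tol))
```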
3.3 Definition

We now give a definition of our new model, which subtly combines the above quantitative measures.

New model. Assuming the ideal gradient map can be estimated well and is denoted by g = (g_x, g_y), a smoothed image of high quality can be estimated by the following model:

arg min_u ||u − f||₂² + λ ||∇u||₀^V + β ||∇u − g||₂^V,   (4)

where the first regularization term is the sparsity constraint on ∇u, and the second regularization term constrains ∇u to follow g. λ and β are regularization parameters. The new model involves a novel V norm, which is explained in detail next. As the elements of ∇u = (∇_x u, ∇_y u) are two-component tuples, ∇u cannot be measured by traditional norms. We concisely denote the regularization terms by using a superscript V. The second term in Eq. (4) denotes the following formulation:

||∇u||₀^V = #{ i : |∇_x u_i| + |∇_y u_i| ≠ 0 },

where # represents the cardinality of a set: this term is the number of pixels with non-zero gradient amplitudes. The third term in Eq. (4) expands to

||∇u − g||₂^V = Σ_i [ (∇_x u_i − g_{x,i})² + (∇_y u_i − g_{y,i})² ].

The L2 norm has been adopted in various image processing fields, and its ability to preserve fidelity has already been demonstrated. This definition ensures that the resulting gradient map ∇u tends to follow the accurate values in g.

Nonlocal estimation of g. In practice, it is not possible to get the exact gradient map g of the ideal smoothed image as described in Eq. (4). However, the redundancy of natural images provides a natural way to estimate g. Because information is redundant in natural images, there are typically many repetitive or similar patches within a single image. In the ideal image, similar features should be removed or preserved in the same way, so the gradients of similar patches should be consistent. Based on this observation, we propose a nonlocal gradient estimation method for g. Using this method, the gradient of a pixel can be constrained by nonlocally similar patches. This leads to the following model:

arg min_u ||u − f||₂² + λ ||∇u||₀^V + β ||∇u − NL(∇u)||₂^V,   (5)

where NL(·) represents a nonlocal estimation operation. Specifically, each pixel should have a gradient consistent with the gradients of pixels with similar patterns. A patch centered at pixel u_i is denoted by N_i. In a search window S around f_i, a group of similar patches can be collected. The nonlocal gradient estimate at u_i can be expressed as

NL(∇u_i) = Σ_{j ∈ S} w_{i,j} ∇u_j,   (6)

where the weights are calculated in the Y channel of YCbCr color space as

w_{i,j} ∝ exp( −||N_i − N_j||₂² / h² ),

and h acts as a parameter controlling the decay rate of the exponential function. By combining nonlocal estimation and L0 gradient minimization in a single optimization model, our new model can achieve consistent detail removal and edge-preserving effects.
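To illustrate the patch-group construction and the weights w_{i,j}, here is a minimal sketch; the patch size, search-window radius, the top-K truncation of the similar-patch group, and the decay parameter h are illustrative choices rather than the authors' settings.

```python
import numpy as np

def nl_gradient(grad_x, grad_y, Y, i, j, psz=3, search=10, K=8, h=0.2):
    """Nonlocal gradient estimate at an interior pixel (i, j).

    grad_x, grad_y : current gradient maps
    Y              : luminance (Y of YCbCr) channel used for block matching
    Returns the weighted average of the gradients at the K best-matching
    patch centers inside the search window (an Eq.-(6)-style estimate)."""
    r = psz // 2
    H, W = Y.shape
    ref = Y[i - r:i + r + 1, j - r:j + r + 1]   # reference patch (assumes r <= i < H-r, etc.)
    matches = []
    for p in range(max(r, i - search), min(H - r, i + search + 1)):
        for q in range(max(r, j - search), min(W - r, j + search + 1)):
            d2 = np.sum((Y[p - r:p + r + 1, q - r:q + r + 1] - ref) ** 2)
            matches.append((d2, p, q))
    matches.sort(key=lambda t: t[0])            # most similar patches first
    top = matches[:K]
    w = np.exp(-np.array([m[0] for m in top]) / h ** 2)  # similarity weights
    w /= w.sum()
    gx = sum(wk * grad_x[p, q] for wk, (_, p, q) in zip(w, top))
    gy = sum(wk * grad_y[p, q] for wk, (_, p, q) in zip(w, top))
    return gx, gy
```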
3.4 Explanation

As the new model exploits information about nonlocally similar patches to constrain each pixel, it can preserve structure while removing details, giving consistently smoothed results. We now demonstrate the smoothing ability by analyzing in turn four representative patterns in natural images. Figure 5 illustrates four kinds of central patches and their corresponding similar groups.

(a) Strong structural edges (patch A in Fig. 5). Both the gradient ∇f_i and the gradients ∇f_G of the group are strong. During global optimization, the gradient is kept and the resulting edges are sharp.

(b) Details in homogeneous regions (patch B in Fig. 5). In this situation, both the gradient ∇f_i and the gradients ∇f_G in the related patches are globally weak. In this case, the gradient of the current center pixel is set to 0 by the optimization, removing details.

(c) Weak and slender edges (patch C in Fig. 5). The gradient ∇f_i conflicts with the gradients ∇f_G in the related patches. In the similar patches, the gradients are large, indicating that there is a structural edge. However, the gradient of the current patch is weak, as it is a weak piece of a structural edge. In this case, the patch group promotes and enhances the weak edge, preserving it in the resulting image.

(d) Fine-scale and high-contrast details (patch D in Fig. 5). As in (c), the gradients are conflicting. The gradients in the group have globally low amplitudes, while the gradient of the current patch is large and is an outlier in a texture-like region. After optimization, this outlier is removed.

In summary, the new method can deal with both detail removal and structure preservation. The nonlocal constraint can effectively and automatically select important structures to preserve. Figure 6 illustrates a synthetic image with various kinds of noise. Figure 6(e) shows that our method can handle texture-like details (simulated by random salt-and-pepper noise) while at the same time keeping structural edges sharp.

4 Numerical solution

In this section we show how to numerically solve the nonlocal model defined by Eq. (5), give the whole algorithm, and discuss the parameters involved.

4.1 Solver

Energy optimization. As solving the original model is NP-hard, a splitting method which iteratively optimizes subproblems alternately [19] can be used as an effective technique. We give closed-form solutions to the two subproblems respectively. Replacing the unknown gradient (∇_x u, ∇_y u) by auxiliary variables (b, d), the model in Eq. (5) can be made more flexible. The new model can be represented as

min_{u,b,d} ||u − f||₂² + λ ||(b, d)||₀^V + β ||(b, d) − NL(b, d)||₂^V + γ ( ||∇_x u − b||₂² + ||∇_y u − d||₂² ),   (7)

which can be split into two subproblems optimized iteratively. In each iteration, the following two steps are used.

Step 1. Fix (b, d) and solve for u. The following subproblem can be extracted from Eq. (7):

min_u ||u − f||₂² + γ ( ||∇_x u − b||₂² + ||∇_y u − d||₂² ),   (8)

which has the optimality condition

(I − γΔ) u = f + γ ( ∇_x^T b + ∇_y^T d ),   (9)

where I denotes the identity matrix, and Δ = −(∇_x^T ∇_x + ∇_y^T ∇_y) denotes the Laplace operator. The solution to Eq. (9) is unique, but it is computationally complex to solve directly, requiring a large matrix inversion. Instead, Gauss-Seidel iteration can be used to solve it approximately; Fourier transforms are used for further speed. Under periodic boundary conditions for the variable u, using a 2D discrete Fourier transform, we can diagonalize the Hessian matrix on the left-hand side of Eq. (9), giving an explicit solution which only requires component-wise operations.

Step 2. Fix u and solve for (b, d). The corresponding subproblem can be expressed as follows:

min_{b,d} λ ||(b, d)||₀^V + β ||(b, d) − NL(b, d)||₂^V + γ ( ||∇_x u − b||₂² + ||∇_y u − d||₂² ).   (10)

The nonlocal estimation NL(b_i) is intractable when b_i is unknown, and it is difficult to perform the optimization if NL(b_i) is treated as an unknown variable. An effective and practical way is to split this into two stages. Firstly, estimate the nonlocal variable NL(b_i) using the estimated image u from the previous iteration, i.e., replace NL(b_i) by NL(∇u_i). Secondly, estimate b_i supposing NL(b_i) known. Furthermore, the above functional has the useful property that it can be split into |{f_i}| individual subproblems. For any i, the corresponding subproblem can be formulated as optimizing the following energy functional with (b_i, d_i) unknown:

E_i = λ H(|b_i| + |d_i|) + β [ (b_i − NL(b_i))² + (d_i − NL(d_i))² ] + γ [ (∇_x u_i − b_i)² + (∇_y u_i − d_i)² ],   (11)

where H(·) represents the Heaviside (indicator) function, i.e., H(a) = 1 when a ≠ 0 and H(a) = 0 otherwise. It is easy to see that when all subproblems {E_i} are solved, the whole functional in Eq. (10) is optimized. Equation (11) involves a discrete counting function, and its solution is made tractable by the following closed form: writing p_i = (β NL(b_i) + γ ∇_x u_i)/(β + γ) and q_i = (β NL(d_i) + γ ∇_y u_i)/(β + γ), we take

(b_i, d_i) = (p_i, q_i) if E_i(p_i, q_i) < E_i(0, 0), and (0, 0) otherwise.   (12)
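A compact sketch of one outer iteration (the FFT-based u-update followed by the per-pixel (b, d) update) is given below; it follows the splitting described above under the reconstructed forms of Eqs. (9) and (12), with first-order periodic difference operators and NumPy. Function and variable names are ours, so treat this as illustrative rather than the authors' exact implementation.

```python
import numpy as np

def circ_diff_kernels(shape):
    """Periodic forward-difference kernels for x and y, as full-size PSFs in Fourier space."""
    dx = np.zeros(shape); dx[0, 0] = -1.0; dx[0, 1] = 1.0
    dy = np.zeros(shape); dy[0, 0] = -1.0; dy[1, 0] = 1.0
    return np.fft.fft2(dx), np.fft.fft2(dy)

def grad(u, Fdx, Fdy):
    """Gradients computed with the same circulant operators used in the solver."""
    gx = np.real(np.fft.ifft2(Fdx * np.fft.fft2(u)))
    gy = np.real(np.fft.ifft2(Fdy * np.fft.fft2(u)))
    return gx, gy

def solve_u(f, b, d, gamma, Fdx, Fdy):
    """Closed-form FFT solve of (I - gamma*Laplacian) u = f + gamma*(Dx^T b + Dy^T d)."""
    denom = 1.0 + gamma * (np.abs(Fdx) ** 2 + np.abs(Fdy) ** 2)
    rhs = np.fft.fft2(f) + gamma * (np.conj(Fdx) * np.fft.fft2(b)
                                    + np.conj(Fdy) * np.fft.fft2(d))
    return np.real(np.fft.ifft2(rhs / denom))

def update_bd(gx, gy, nlx, nly, lam, beta, gamma):
    """Per-pixel (b, d) update: blend local/nonlocal gradients, then hard-threshold."""
    p = (beta * nlx + gamma * gx) / (beta + gamma)
    q = (beta * nly + gamma * gy) / (beta + gamma)
    # compare keeping (p, q) (cost includes the counting penalty lam)
    # against forcing the gradient to zero
    e_keep = lam + beta * ((nlx - p) ** 2 + (nly - q) ** 2) \
                 + gamma * ((gx - p) ** 2 + (gy - q) ** 2)
    e_zero = beta * (nlx ** 2 + nly ** 2) + gamma * (gx ** 2 + gy ** 2)
    mask = e_keep < e_zero
    return np.where(mask, p, 0.0), np.where(mask, q, 0.0)
```

Note that the same difference operators must be used for grad and solve_u, otherwise the adjoint pairing in the right-hand side of Eq. (9) is broken.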
Weight updating. In Step 2, the nonlocal estimation NL(∇u_i) involves the calculation of weights w_{i,j} as defined in Eq. (6). As iteration proceeds, the estimated image û gets closer to the ideal smoothed image, so the results of block matching become more accurate. Therefore, in the k-th iteration, we update the nonlocally similar group using the latest updated image u^{k−1}, and recalculate the weights as

w_{i,j}^k ∝ exp( −||N_i^{u^{k−1}} − N_j^{u^{k−1}}||₂² / h² ).   (13)

In practice, block matching and weight updating can be performed every K₀ iterations for speed. Finally, the whole procedure of our method for image smoothing is summarized in Algorithm 1.

4.2 Analysis of parameters

In this section the various parameters involved are analyzed.

Regularization parameters λ, β, γ. The parameter λ controls the degree of smoothing. In order to satisfy the requirements of the variable-splitting scheme, the parameters β and γ should increase as iteration proceeds. When iteration stops, the two parameters should be large enough to guarantee that the gradient map of the output image is close to the estimated gradients (b, d). From the update formula for (b, d) in Eq. (12), we can see that β and γ balance the impact of the nonlocally estimated gradient and the local gradient for the i-th pixel. If β > γ, the nonlocal estimation dominates the local gradient, and the result will be more consistent, and vice versa. The nonlocal method degenerates to the L0 model if β = 0. Figure 7 shows an example using varying parameters. In each row, β/γ is fixed. From left to right, as λ increases, further details are eliminated and the images become coarser. Experiments show λ = 0.01-0.04 is suitable in most cases. In each column, λ is fixed and the ratio β/γ adjusts the relative importance of nonlocal and local gradients for locating details and manipulating gradients. In the extreme case of β = 0, our model degenerates to L0 smoothing, meaning only local pixel-wise gradients are considered. Comparing the first row to the others, we can see that small-scale details are retained even for large λ. If β goes beyond γ, the nonlocally estimated gradients help to handle such small-scale details more consistently.

5 Comparison

We now compare our method to a series of smoothing methods. A good smoothing method should satisfy the following requirements: structural edges should be well preserved without being over-smoothed or blurred, while details should be smoothed out completely. As pointed out earlier, our method degenerates to the L0 smoothing method if we set the parameter β to 0. As shown in Fig. 8(a), the "small-house" input image is composed of explicit large-scale structures, including house and road boundaries, and high-contrast details on the roof, grass, and road. Figures 8(b)-8(j) present results of other methods, whose parameters have been tuned carefully using our best efforts. As shown in Figs. 8(b)-8(e), in order to suppress details, edge-preserving methods, including the bilateral filter, WLS, the guided filter, and the local Laplacian filter, cause blurring on structural edges. Furthermore, details cannot be removed. Although L0 smoothing can keep structural edges sharp, it cannot smooth out high-contrast details, as shown in Fig. 8(f). From Figs. 8(g)-8(k), we can see that structure-preserving methods can prevent edge blurring. However, some details still remain in the result for local extrema envelopes, as shown in Fig. 8(h). RTV can remove details, but the edges are not preserved naturally, as shown in Fig. 8(i). The region covariance and tree filter methods cannot produce clear images, as shown in Figs. 8(j) and 8(k), respectively. Compared to these methods, our method performs well in both edge preservation and detail removal. In Fig. 8(l), texture details are flattened naturally and structural edges are preserved without blurring. In summary, our method can consistently remove details and preserve structural edges.
6 Applications

As a fundamental technique, image smoothing has many applications in base- and detail-layer manipulation. As can be seen from the previous section, our smoothing method can preserve structural edges well while smoothing details out. In this section, we show some applications of our method, including edge detection, edge manipulation, detail magnification, and content-aware resizing, to illustrate its advantage of retaining important features.

6.1 Edge detection and manipulation

Edge detection. There are rich details in natural images which can interfere with edge detection. Our method can remove trivial details, so it can help retain clean and accurate edges. As illustrated in Fig. 9, many fine edges are included in the original gradient map, while our smoothed result produces a gradient map mainly containing meaningful edges. The edge map detected on our smoothed image by the Canny operator is much cleaner and contains fewer unimportant edges.

Image abstraction. Image abstraction is a practical application given the growing demand for image editing tools for amateur users [20]. Our method can serve as the abstracting tool. First the input image is smoothed, as shown in Fig. 10(b). Then edges are detected in the smoothed image. Finally, an enhanced version of the edge map is computed and added back to the smoothed image. This gives a new image with enhanced edges. An example is illustrated in Fig. 10(c).

Pencil sketching. Pencil sketching is a useful image editing tool for generating non-photorealistic images [21]. It can be accomplished in three steps: firstly, smooth the input image via our method; secondly, detect edges with a Canny operator; and thirdly, randomly select small edge segments of fixed length according to the gradient amplitude and add them to the extracted edges. The gray level is proportional to the gradient amplitude, and the direction is the tangent direction of the edge. As shown in Fig. 10(d), the result is visually pleasing and reflects the main objects in the input image.

6.2 Detail magnification

Detail magnification aims to output an image with similar content but enhanced details relative to the input image. For a given image, first we obtain two layers, a smooth layer and a detail layer, via our smoothing method. Then some enhancement operation is performed on the detail layer, e.g., using a difference of Gaussians (DoG) operator. Finally, the enhanced detail layer is composited with the smooth layer. In the composite image the details are magnified. As illustrated in Fig. 11, details in the resulting image are much clearer than in the input image.

6.3 Image resizing

Seam carving [22] is a popular content-based image resizing method. It aims to keep the shape of the most important objects during the resizing procedure. But textures and details, which are common in natural images, may also be considered salient by the contrast-based measure. Figure 12 illustrates two examples scaled to 0.6 of the size of the input images. From Fig. 12(b) we can see that the grass and waves are considered important, and the stone and sailing boat are distorted mistakenly by the original seam carving method [22]. Because our method suppresses the features of the grass and waves, as shown in Fig. 12(c), seam carving on our output can effectively keep the shapes of the stone and the sailing boat.
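As a closing illustration for this section, the smooth/detail layer manipulation of Section 6.2 reduces to a few lines once a smoothing operator is available; here smooth is any callable (e.g., the solver sketched in Section 4), and the boost factor k is an illustrative choice rather than a value from the paper.

```python
import numpy as np

def magnify_details(f, smooth, k=2.5):
    """Boost the detail layer: f = base + detail  ->  base + k * detail."""
    base = smooth(f)       # structure-only layer from any smoothing routine
    detail = f - base      # residual fine-scale layer
    return np.clip(base + k * detail, 0.0, 1.0)
```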
7 Conclusions

This paper provides a novel image smoothing model based on a nonlocal consistency constraint on the gradient map. Nonlocal estimation provides a data-driven way to distinguish details from structural edges. The method works well for both detail removal in complex areas and feature preservation in less notable areas, and achieves more consistently smoothed results than previous methods. Moreover, as the new model can be solved efficiently by the algorithm described in this paper, it can be flexibly embedded into various techniques, and is able to help improve the performance of many content-aware applications. The limitation of our method is that it may fail for some mosaic images. In future work we will explore proper estimation of the ideal gradient map, especially for mosaic images.

Figure captions:
Fig. 1: Image smoothing examples. Top: input images. Bottom: corresponding smoothed results using our method.
Fig. 3: A scanline extracted from each image in Fig. 2, as marked by arrows in Fig. 2(a). The green channel is shown as it is representative of the main image content. Each blue curve represents the input signal, and the red curve represents the smoothed signal.
Fig. 4: Gradient map of an example natural image and two representative sub-images. (a) Input image. (b) Visualized gradient amplitude map; amplitudes are normalized to [0, 1] and colorized according to the colormap on the right.
Fig. 5: Four representative patches A, B, C, and D (red rectangles), and for each patch, several similar patches (yellow rectangles).
Fig. 6: Synthetic image example for image smoothing.
Fig. 12: Seam carving. (a) Input. (b) Seam carving results on images in (a). (c) Seam carving results on images smoothed by our method.
Stochastic resonance and nonlinear response in a dissipative quantum two-state system

We study the dynamics of a dissipative two-level system driven by a monochromatic ac field, starting from the usual spin-boson Hamiltonian. The quantum Langevin equations for the spin variables are obtained. The amplitude of the coherent oscillations in the average position of the particle is studied in the high-temperature limit. The system exhibits quantum stochastic resonance, in qualitative agreement with earlier numerical results.

I. INTRODUCTION

Stochastic resonance is a nonlinear phenomenon. It has been predicted and experimentally observed in damped classical systems [1-4]. It is characterized by a maximum, as a function of noise strength, in the response of the system to an external input signal at the input signal frequency. There have been some recent studies of stochastic resonance in damped quantum double-well systems as well [5-9]. Recently, Makarov and Makri have studied the phenomenon in a dissipative two-level system numerically, using an iterative path-integral scheme, and have tried to understand the results analytically [8,9]. They demonstrate that it is possible to induce and maintain large-amplitude coherent oscillations by exploiting the phenomenon of stochastic resonance in quantum systems. In their treatment the noise strength is varied, through the coupling to the heat bath, to obtain stochastic resonance. A maximum in the response is also obtained with respect to the driving field strength, indicating a breakdown of linear response theory. However, their analytical treatment involves several approximations. They start with the evolution of the dynamical variables of a system using the Heisenberg equation of motion for a given spin-boson (system-bath) Hamiltonian in the presence of a monochromatic field. In their weak-coupling approximation the bath causes stochastic energy fluctuations which are unaffected by the dynamics of the two-level system. To obtain the steady-state average position of the particle, ⟨σ_z⟩, they require a steady-state solution for the population difference between the two-level-system eigenstates, ⟨σ_x⟩. The required expression for ⟨σ_x⟩ is then taken from the solutions of the phenomenological optical Bloch equations in the rotating-wave approximation. The expression for ⟨σ_z⟩ thus obtained with these approximations seems to be in good agreement with their numerical simulations. In the present work we study stochastic resonance analytically by systematically deriving the quantum Langevin equation for the system by eliminating the bath variables [10-13]. The response of the system to an oscillating field is obtained in the high-temperature limit. This limit corresponds to treating the random force operator as a classical c-number variable. Our results are in qualitative agreement with earlier works. In Section II the derivation of the quantum Langevin equation is presented. The spin Bloch equations are obtained and the stationary solution of the relevant variables is given in Section III. The last section, IV, is devoted to results and discussion.

II. The Quantum Langevin equation for system variables

To obtain the quantum Langevin equations for the system variables, we consider a symmetric two-level system interacting linearly with a bath of harmonic oscillators in the presence of a time-dependent external monochromatic field V₀ cos(ωt). The total Hamiltonian is given by [8,9]

H = ħΔ₀ σ_x + [ V₀ cos(ωt) + s₀ Σ_k g_k (a_k + a_k†) ] σ_z + Σ_k ħω_k a_k† a_k,   (1)

where the σ's are the Pauli matrices.
Here a_k and a_k† are annihilation and creation operators for the bath variables, g_k is the coupling constant, and 2s₀ is the distance between the two wells. The population difference between the two eigenstates of the two-level system, which are separated by 2ħΔ₀, is given by σ_x. The average value of σ_z represents the average position of the particle; σ_z being +1 or −1 is equivalent to the particle being in the right or the left well, respectively. The quantum equation of motion for any dynamical variable A can be obtained from the evolution equation

dA/dt = (i/ħ) [H, A] + ∂A/∂t,   (2)

where [·,·] indicates the commutator. The commutation relations among the spin variables are given by

[σ_x, σ_y] = 2iσ_z   (3)

and their cyclic permutations. Moreover,

[a_k, a_{k'}†] = δ_{kk'}.   (4)

Using equations (1)-(4) one can readily write down the equations of motion for the variables as

σ̇_x = −(2/ħ) [ V₀ cos(ωt) + s₀ Σ_k g_k (a_k + a_k†) ] σ_y,   (5)
σ̇_y = (2/ħ) [ V₀ cos(ωt) + s₀ Σ_k g_k (a_k + a_k†) ] σ_x − 2Δ₀ σ_z,   (6)
σ̇_z = 2Δ₀ σ_y,   (7)

and

ȧ_k = −iω_k a_k − (i/ħ) g_k s₀ σ_z,   (8)
ȧ_k† = iω_k a_k† + (i/ħ) g_k s₀ σ_z.   (9)

Equations (8) and (9) are linear and therefore can be explicitly integrated, to obtain

a_k(t) = a_k(0) e^{−iω_k t} − (i g_k s₀/ħ) ∫₀^t dt′ e^{−iω_k (t−t′)} σ_z(t′),   (10)
a_k†(t) = a_k†(0) e^{iω_k t} + (i g_k s₀/ħ) ∫₀^t dt′ e^{iω_k (t−t′)} σ_z(t′),   (11)

where a_k(0) and a_k†(0) are the bath operator values at the initial time t = 0. Substituting for a_k(t) and a_k†(t) from equations (10) and (11) in equations (5) and (6), we obtain the quantum Langevin equations (12) and (13) for the system variables, in which the bath enters through a memory integral over σ_z and through the operator noise term F(t), given by

F(t) = Σ_k g_k [ a_k(0) e^{−iω_k t} + a_k†(0) e^{iω_k t} ].   (14)

As the dynamical operators (a_k(0), a_k†(0)) of the bath are distributed in accordance with the statistical equilibrium distribution for a given temperature T, F(t) is referred to as the Langevin operator noise term [12,13]. The integrals in equations (12) and (13) can be integrated by parts, leading to equations (15) and (16), in which G(t − t′) represents a damping or memory kernel [10]. The last two terms in equations (15) and (16) are transient terms, which can be neglected if one assumes an Ohmic spectral density for the bath oscillators [10,11], i.e.,

ρ(ω) = αω,

where α is a dimensionless dissipation coefficient (or Kondo parameter). Notice that this form of the spectral density is not bounded from above, and hence for physical reasons one introduces an upper cut-off frequency ω_c, namely ρ(ω) = αω e^{−ω/ω_c}, such that the frequency scale ω_c is assumed to be much larger than the characteristic frequencies of the problem. With this form of the spectral density one can readily show that the transient terms survive only up to a time scale 1/ω_c, which can be made arbitrarily small and thus can be ignored [11]. Thus, for the long-time behaviour and for the Ohmic spectral density, equations (15) and (16) simplify further, and we arrive at the equations of motion (18)-(20) for the spin variables, with the memory kernel G(t − t′) = 2αδ(t − t′). Simplifying equations (18)-(20) further and making use of the properties of the spin operators, we finally obtain the Langevin equations of motion (21)-(23) for the spin variables. These Langevin equations involve the operator random force F(t). The statistical properties of F(t) can be obtained using the equilibrium distribution for the bath variables,

ρ_B = Z⁻¹ exp( −β Σ_k ħω_k a_k† a_k ),

where β = 1/k_BT. Using this and the Ohmic spectral density for the bath oscillators, the symmetrized autocorrelation of the operator-valued random force F(t) is given by [12,13]

(1/2) ⟨{F(t), F(t′)}⟩ = (ħη/π) ∫₀^∞ dω ω e^{−ω/ω_c} coth(ħω/2k_BT) cos[ω(t − t′)],   (24)

and the nonequal-time commutator is given by

[F(t), F(t′)] = −(2iħη/π) ∫₀^∞ dω ω e^{−ω/ω_c} sin[ω(t − t′)].   (25)

III. The Spin Bloch equations and their solutions

Owing to the operator nature of the random Langevin force F(t), it is difficult to solve for the expectation values of the spin variables using equations (21)-(23). For simplification we make a first approximation in which the operator random force is treated as a classical c-number random variable.
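A quick way to see when this c-number treatment is adequate is the high-temperature limit of the symmetrized correlator; the worked step below uses the reconstructed form of Eq. (24) above, whose overall normalization should be checked against the original paper.

```latex
% Classical (high-temperature) limit of the symmetrized noise correlator:
% expand coth(x) -> 1/x for x = \hbar\omega / 2k_BT \ll 1
\frac{1}{2}\langle\{F(t),F(t')\}\rangle
  = \frac{\hbar\eta}{\pi}\int_0^\infty \! d\omega\,\omega\,
    \coth\!\Big(\frac{\hbar\omega}{2k_BT}\Big)\cos\omega(t-t')
  \;\longrightarrow\;
    \frac{2\eta k_BT}{\pi}\int_0^\infty \! d\omega\,\cos\omega(t-t')
  = 2\eta k_BT\,\delta(t-t').
```

The ħ in the prefactor cancels against the 1/ħ from the expanded cotangent, leaving a temperature-proportional white-noise strength, which is exactly the form used in the next section.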
One can readily verify that in the classical limit [12,13], taking ħ → 0, the nonequal-time commutator of F(t) vanishes and the autocorrelation of the Gaussian random force F(t) becomes

⟨F(t) F(t′)⟩ = 2η k_BT δ(t − t′),   (27)

where η is the friction coefficient, related to the Kondo parameter α through the relation η = (ħ/2s₀²)α. The classical Markov approximation for F(t) is valid in the high-temperature limit, which will become clear below. With the above approximation one can readily write down the equations of motion of the spin variables averaged over the ensemble of realizations of the random fluctuations F(t), whose autocorrelation is given by equation (27), with the help of Novikov's theorem [14-16]. We obtain the spin Bloch equations (29)-(31) for the spin variables, in which δ = 2αk_BT/ħ and A = 2V₀/ħ. Unlike in the standard NMR situation, there is no relaxation term in the evolution of ⟨σ_z⟩. This is because the fluctuating environmental fields are exclusively along the z direction [10]. These equations are characterized by a single relaxation time τ = δ⁻¹ = ħ/2αk_BT. In the absence of an external driving field, the system relaxes asymptotically to equilibrium, and the equilibrium value of the population difference between the two levels separated by an energy 2ħΔ₀ is ⟨σ_x⟩_eq = −tanh(ħΔ₀/k_BT). This shows that our approximation of treating the operator random force as a classical random c-number variable is valid at high temperatures such that ħΔ₀/k_BT ≪ 1. The stationary solution of the Bloch equations in the presence of an external field can be found using the method of harmonic balance. For this we assume stationary solutions for ⟨σ_z⟩_s and ⟨σ_x⟩_s of the form

⟨σ_z⟩_s = a cos(ωt) + b sin(ωt),   (32)
⟨σ_x⟩_s = y + c cos(ωt) + d sin(ωt),   (33)

where a, b, y, c, and d are constants to be determined. Substituting (32) and (33) in the Bloch equations (29)-(31) and using harmonic balance, one readily gets a closed-form expression (34) for the steady-state amplitude σ_z^0 of the oscillation in the average position of the particle ⟨σ_z⟩. In that expression we have rescaled all the energy variables in terms of k_BT, i.e., Δ₁ = ħΔ₀/k_BT, V₁ = V₀/k_BT, and ω₁ = ħω/k_BT are dimensionless variables. Note that in the absence of the driving field the amplitude σ_z^0 vanishes, as expected. Away from resonance (ω = 2Δ₀), the oscillation amplitude for a high-frequency field scales as 1/ω². In the limit of low-frequency (static) driving, ω → 0, and for small V₀, i.e., in the adiabatic limit, σ_z^0 is independent of the relaxation time τ (= ħ/2αk_BT). These results are consistent with those obtained in refs. [8,9].

IV. Results and discussion

In fig. (1) we have plotted the stationary amplitude of the average position of the particle, σ_z^0, as a function of the dissipation coefficient (or Kondo parameter) α, for various values of the dimensionless field amplitude V₁ (≡ V₀/k_BT). We have restricted ourselves to the resonant condition ω = 2Δ₀. The solid curve is for V₁ = 0.01, the long-dashed curve for V₁ = 0.04, and the short-dashed curve for V₁ = 0.07. We see that all these curves exhibit maxima at an optimum value of α, which depends on Δ₁, V₁, and ω₁. The maximum value of the peak (M_p), however, is independent of the field strength and is given by M_p = √2 (Δ₁²/ω₁), but the position of the peak, α_m = [2V₁²ω₁² − (ω₁² − 4Δ₁²)²]^{1/2}/(2ω₁), shifts towards higher values of α as we increase V₁. The independence of the peak maximum from the field strength was also noted in reference [8,9] for the resonant case. Our expression for M_p is valid for any frequency.
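A few lines suffice to evaluate the quoted peak height and position numerically; note that the square-root grouping in α_m follows our reconstructed reading of the garbled expression above, so treat it as an assumption.

```python
import numpy as np

def peak_height(delta1, omega1):
    """M_p = sqrt(2) * delta1**2 / omega1 (independent of the field strength)."""
    return np.sqrt(2.0) * delta1**2 / omega1

def peak_position(v1, delta1, omega1):
    """alpha_m = sqrt(2 V1^2 w1^2 - (w1^2 - 4 d1^2)^2) / (2 w1), where defined."""
    arg = 2.0 * v1**2 * omega1**2 - (omega1**2 - 4.0 * delta1**2) ** 2
    return np.sqrt(arg) / (2.0 * omega1) if arg > 0 else None  # peak exists only if arg > 0

# At resonance (omega1 = 2 * delta1) the peak position grows linearly with V1:
# peak_position(0.04, 0.2, 0.4) returns 0.04 / sqrt(2) ~ 0.028,
# consistent with the peak shifting to larger alpha as V1 increases.
```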
The occurrence of the peak, or maximum, in σ_z0 as a function of the coupling strength α is attributed to stochastic resonance in quantum two-level systems. It results from a cooperative interplay between the various competing mechanisms of energy exchange between the two-state system, the thermal bath and the external driving field. In fig. (2) and fig. (3) we have plotted σ_z0 versus ω_1 (≡ ℏω/k_BT), keeping the remaining parameters fixed. The plots indicate that stochastic resonance can be obtained even under off-resonance conditions. This further indicates that the stochastic resonance is indeed a bona fide resonance [17,18]. In fig. (4) we have plotted σ_z0 as a function of the external driving-field amplitude V_1 (≡ V_0/k_BT) for fixed Δ_1 and ω_1 and for two values of α, the second being 0.06. We notice that the response of the system, i.e., σ_z0, initially increases with the field amplitude and attains a maximum at a particular value of V_1 that depends on α and on the other parameters. After exhibiting the maximum, σ_z0 decreases as we increase V_1 further. For given values of Δ_1, ω_1 and α, the maximum value (M_p) of the peak in σ_z0 occurs at a particular field-amplitude value and equals √2 (Δ_1²/ω_1), which is independent of α, the Kondo parameter. Linear response theory is valid for small V_1, in the regime to the left of the maximum. In these plots stochastic resonance manifests itself as a breakdown of linear response theory, thus bringing out the nonlinear nature of the problem explicitly. In fig. (5) we have plotted σ_z0 versus Δ_1 (≡ ℏΔ_0/k_BT), for given ω_1 (≡ ℏω/k_BT) = 0.4 and a small α = 0.04. The maximum appears well within the range of acceptable values of Δ_1 ≪ 1, i.e., ℏΔ_0 ≪ k_BT. For very small values of α we in fact get a maximum response at two values of Δ_1 (with Δ_1 ≠ ω_1/2), indicating stochastic resonance under off-resonance conditions (similar to the observation made in fig. (3)). In conclusion, we have derived the quantum Langevin equation for a dissipative two-level system driven by a monochromatic ac field, starting from the microscopic spin-boson Hamiltonian. The equations of motion for the average values of the spin variables are obtained in the high-temperature limit. We have obtained an analytical expression for the amplitude of the coherent oscillation in the average position of the particle, which exhibits stochastic resonance with respect to the various parameters of the problem. These results are in qualitative agreement with the results obtained by Makarov and Makri using the numerical iterative path-integral scheme [8,9].
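The two closed-form statements above — a field-independent peak height M_p and a peak position α_m that grows with V_1 — can be checked with a few lines of Python. This sketch assumes the quoted α_m bracket sits under a square root, which is an assumption of the sketch rather than a statement of the paper:

```python
import numpy as np

def alpha_peak(V1, w1, D1):
    """Position of the stochastic-resonance maximum in the Kondo parameter.

    Assumes the quoted expression is
    alpha_m = sqrt(2*V1^2*w1^2 - (w1^2 - 4*D1^2)^2) / (2*w1).
    """
    return np.sqrt(2 * V1**2 * w1**2 - (w1**2 - 4 * D1**2)**2) / (2 * w1)

def peak_height(D1, w1):
    # M_p = sqrt(2) * Delta_1^2 / omega_1, independent of the field strength.
    return np.sqrt(2) * D1**2 / w1

D1 = 0.2
w1 = 2 * D1                      # resonant condition omega = 2*Delta_0
for V1 in (0.01, 0.04, 0.07):    # the three curves of fig. (1)
    print(f"V1={V1}: alpha_m={alpha_peak(V1, w1, D1):.4f}, "
          f"M_p={peak_height(D1, w1):.4f}")
# At resonance alpha_m reduces to V1/sqrt(2), growing with V1, while the
# peak height M_p stays fixed -- consistent with the behaviour described.
```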
2014-10-01T00:00:00.000Z
1997-04-15T00:00:00.000
{ "year": 1997, "sha1": "429c6460601835c665d1c3e5cc71b5020c636618", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9704243", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "25ab9d0ca9d6a54b0218dc94c2498714e3eb328d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
256697394
pes2o/s2orc
v3-fos-license
Analysis of atomic-clock data to constrain variations of fundamental constants

We present a new framework to study the time variation of fundamental constants in a model-independent way. Model independence implies more free parameters than assumed in previous studies. Using data from atomic clocks based on $^{87}$Sr, $^{171}$Yb$^+$ and $^{133}$Cs, we set bounds on parameters controlling the variation of the fine-structure constant, $\alpha$, and the electron-to-proton mass ratio, $\mu$. We consider variations on timescales ranging from a minute to almost a day. In addition, we use our results to derive some of the tightest limits to date on the parameter space of models of ultralight dark matter and axion-like particles.

Introduction

The standard model and general relativity are the currently accepted theories of elementary particles and gravity. Their predictions are largely controlled by a set of free parameters known as fundamental constants, which are extracted experimentally and assumed independent of time and spatial position. The underlying origins and potential spacetime variability of the fundamental constants have been rich subjects of investigation, dating back to Dirac's large numbers hypothesis [1,2].

In the years since, dynamical mechanisms, for example from string theory, have been suggested to explain the constants' origins. In some regimes, additional scalar fields imply spacetime variations [3-7]. More generally, many realistic models give rise to variations of fundamental constants (comprehensive reviews may be found in, e.g., Refs. [8,9]). Models based on quantum field theory, such as Bekenstein [10] or Barrow [11,12] models, can describe fundamental-constant variations in terms of low-energy effective interactions of additional scalar fields coupled to the standard model. More recently, variations of the dimensionless constants due to so-called ultralight fields, which could be related to dark matter [13-15], have revived significant attention in this idea (for a recent review see, e.g., [16]). Investigating the variability of fundamental constants is strongly tied to the foundational assumptions and outstanding problems in modern physics. This work reports on new developments in studying temporal variations of the dimensionless fundamental constants. In Sec. 2, a generic approach based on effective field theory, applicable particularly to the analysis of temporal variations, is described. The explicit spacetime dependence of a generic dimensionless constant is represented by a series expansion of a scalar field normalized to the energy scale of the new physics responsible for the time variation. A description of scalar-field evolution, including arbitrary damping effects, is given, along with the number of observable parameters. This setup covers a broad range of models describing temporal variations in the literature. It also has significant consequences for the interpretation of measurements, demonstrating that there are more free parameters than typically expected.
Many previous experiments have used the extreme precision of atomic clocks to investigate time variations of fundamental constants. Examples include constraints placed on linear-in-time variations [17-20], oscillations [21-25] and transients [26,27]. For cases where the experimental data have been interpreted to place bounds on theoretical parameters such as coupling constants, specific models for the scalar fields and their dynamics have always been assumed. Our approach in this paper, however, allows constraints on time variation to be presented in a model-independent way, without assuming the functional form of the scalar field. The model-independent bounds can be interpreted in terms of specific models within the framework presented in Sec. 2, where we work with the most generic functional form for the scalar field and effective field theory methods.

In Sec. 3, measurements are presented from atomic clocks (87Sr, 171Yb+ optical and 133Cs microwave) at the National Physical Laboratory (NPL), assessing the temporal variability (stability) of the fine-structure constant, α, and the electron-to-proton mass ratio, µ = m_e/m_p, over a period of about two weeks. The frequency ratios between 171Yb+, 87Sr and 133Cs clock transitions place constraints on oscillations in α and µ at and beyond the previous state of the art.

Using these data, in particular the 171Yb+/87Sr and 87Sr/133Cs ratios, in Sec. 4 model-independent constraints are placed for the first time on low-dimensional couplings of an ultralight scalar field to matter. In Sec. 5, new constraints are extracted and compared with previous results for the special cases of ultralight scalar and axion-like dark matter. The prospects of future measurement campaigns are discussed in Sec. 6.

Field theory description of varying constants

In this section we introduce our generic framework to describe the spacetime variation of fundamental constants. This framework relies on effective field theory methods (see, e.g., [28]). It is a tool of quantum field theory that has been extensively applied to many areas including particle, nuclear, atomic, condensed matter and gravitational physics. The significance of this approach is that it enables calculations and predictions, which can be tested in experiments, that are generic and universal.

Describing the spacetime variation of a fundamental coupling constant g(t, x⃗) can be accomplished by promoting the constant to a series expansion involving a scalar field ϕ(t, x⃗): g(t, x⃗) = g₀ + (1/Λ) ϕ(t, x⃗) + …, where g₀ is the spacetime-independent contribution and Λ is the high-energy scale relevant to the onset of the physics responsible for the spacetime variation of constants. Coupling ϕ(t, x⃗) to conventional matter otherwise described by a coupling g₀ is thus accommodated by replacing g₀ → g(ϕ).

In this paper, we are primarily interested in time-varying fundamental constants and thus consider only time-dependent scalar fields. The most generic field equation for a scalar field corresponds to a damped harmonic oscillator. From an effective field theory point of view, this is the field equation that corresponds to dimension-four operators: the kinetic term for the scalar field and potential interactions leading to a decay of the scalar field, which can be parametrized by operators of dimensions 3 and 4.
The equation of motion for a damped harmonic oscillator is given by

ϕ̈(t) + Γ ϕ̇(t) + m² ϕ(t) = 0,    (2)

where ϕ(t) is a time-dependent scalar field, m is the mass and Γ is a damping factor. The solution to the field equation depends on the boundary conditions. For oscillatory solutions, the classes of behavior are identified by the relation between m and Γ: in the underdamped case (Γ < 2m) the solution is ϕ(t) = ϕ₀ e^(−Γt/2) cos(ω_d t + θ), while in the overdamped case (Γ > 2m) it is a sum of two decaying exponentials with amplitudes ϕ₀,₁ and ϕ₀,₂. Here ϕ₀, ϕ₀,₁ and ϕ₀,₂ are amplitudes, θ is a phase and ω_d = √(m² − Γ²/4).

Standard effective field theory methods can be used to describe the interactions of ϕ with conventional matter in a general manner. For example, the interactions of the scalar field with the photon field A_µ and the electron field ψ_e can be described, for positive integer n, by interaction terms of the schematic form κⁿ d_γ^(n) ϕⁿ F_µν F^µν and κⁿ d_me^(n) ϕⁿ m_e ψ̄_e ψ_e, where κ = √(4πG) = 1/(√2 M_P), with G the Newtonian constant of gravitation and M_P the reduced Planck mass. Note that we normalize the interactions to the strength of the gravitational interaction, which is the weakest interaction in nature known to date. The dimensionless couplings d_γ^(n) and d_me^(n) control, respectively, the strength of interactions between ϕ and the photon and the electron. They may be taken as real numbers, and they parameterize the magnitude of the time variation of the fine-structure constant and of the electron mass. The definition used in Eq. (6) sets the high-energy scale Λ from Eq. (1) such that κⁿ d^(n) are the combinations of parameters that determine the couplings of the scalar field to the different matter components.

The linear scalar-field coupling (n = 1) and the quadratic scalar-field coupling (n = 2) are the most relevant, as they are the lowest-order effective operators and their effects are thus expected to be the strongest. Interactions with n ≥ 3 are suppressed by additional powers of the scale of the physics responsible for the time variation, and are therefore very much suppressed if this is a high-energy scale of the order of the Planck scale, for example.

Note that there are a number of free parameters that need to be fitted to data: the couplings d_j^(n), an amplitude, a damping factor, an oscillation frequency and a phase. We can use this fully generic approach to calculate the shift of the fine-structure constant α due to the scalar field ϕ(t), obtaining in the linear case (n = 1) δα/α = κ d_γ^(1) ϕ(t). In the physically relevant underdamped regime, we obtain δα/α = κ d_γ^(1) ϕ₀ e^(−Γt/2) cos(ω_d t + θ). In the quadratic case (n = 2), we have δα/α = κ² d_γ^(2) ϕ(t)², which leads in the underdamped regime to δα/α = κ² d_γ^(2) ϕ₀² e^(−Γt) cos²(ω_d t + θ). In all cases, we have five independent parameters: the coupling constant to matter d_γ^(n), an amplitude ϕ₀, a damping factor Γ, an oscillation frequency ω_d and a phase θ. Note that they may not all be measurable independently: a measurement of a change of α is only sensitive to the product d_γ^(n) ϕ₀ⁿ. Similarly, the coupling of ϕ to gluons is controlled by d_g^(n) and that to quarks by the quark coupling constants d_mf^(n). These free parameters control the degree of time variation of the proton mass and of the quark masses. We note that the vacuum energy due to gluons, which corresponds to the Quantum Chromodynamics (QCD) scale Λ_QCD, accounts for approximately 90% of the nucleon mass M_N. On the other hand, the light-quark masses m_u, m_d and m_s account for a mere ∼10% of the nucleon masses [29]. The proton mass is thus much more sensitive to a time variation of Λ_QCD than to a variation of the light quark masses if all of these parameters vary with time. For more details on the QCD couplings, please see Appendix A.
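As a numerical companion to the damping classes and the induced linear shift, here is a short sketch. The critically and overdamped branches keep only simplified constant choices, and every parameter value is an illustrative assumption:

```python
import numpy as np

def phi(t, m, Gamma, phi0=1.0, theta=0.0):
    """Solution classes of phi'' + Gamma*phi' + m^2*phi = 0 (natural units)."""
    if Gamma < 2 * m:                       # underdamped: damped oscillation
        wd = np.sqrt(m**2 - Gamma**2 / 4)
        return phi0 * np.exp(-Gamma * t / 2) * np.cos(wd * t + theta)
    if Gamma == 2 * m:                      # critically damped; both
        return (phi0 + phi0 * t) * np.exp(-Gamma * t / 2)  # constants = phi0
    lam = np.sqrt(Gamma**2 / 4 - m**2)      # overdamped: keep only the
    return phi0 * np.exp(-(Gamma / 2 - lam) * t)           # slower mode

# Linear (n = 1) fractional shift delta_alpha/alpha = kappa * d1 * phi(t),
# with kappa = 1/(sqrt(2)*M_P). The mass, damping, coupling and amplitude
# below are purely hypothetical numbers for illustration.
M_P = 2.435e27                              # reduced Planck mass [eV]
kappa = 1 / (np.sqrt(2) * M_P)              # [1/eV]
m, Gamma, d1, phi0 = 1e-18, 1e-22, 1e-3, 2.1e15   # [eV], [eV], -, [eV]
t = np.linspace(0, 5 / m, 7)                # a few oscillation times [1/eV]
print(kappa * d1 * phi(t, m, Gamma, phi0))  # time series of delta_alpha/alpha
```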
The main motivation for this methodology is that a broad range of models is encompassed by this effective field theory approach. By providing bounds on a time variation of fundamental constants using this formalism, bounds for specific models can easily be obtained, and our results can then be interpreted within specific models, for example:

• Quintessence-like models, see, e.g., [30-32]: in that case Γ = 3H, where H is the Hubble parameter; but because today H ∼ 1/t_today (where t_today is the age of the universe), the damping is irrelevant today. Note that for quintessence models the mass of the scalar field is of order H ∼ 10⁻³³ eV, and there is some tension with torsion-pendulum experiments (Eöt-Wash, see, e.g., [33]), as these experiments exclude new bosons with masses lighter than 10⁻² eV for d_j ∼ 1; one therefore needs to consider models with d_j ≪ 1.

• Ultralight dark matter [13-15]: we must assume Γ = 0, as dark matter is stable. As described in detail in Sec. 5, if the scalar field with mass m accounts for all of dark matter, we can relate ϕ₀ to the local density of dark matter ρ_DM: ϕ₀ ≈ √(2ρ_DM)/m. Note that we could have a multi-component dark matter sector, in which case the relation between ϕ₀ and ρ_DM does not hold. This is further discussed in Sec. 5.

• The scalar field could be a generic scalar from some hidden sector [34]. In that case, the amplitude is a free parameter and Γ ≠ 0 if the field can decay today.

• Kaluza-Klein and moduli models: in models with extra dimensions, the sizes of the compactified extra dimensions can be described by scalar fields (the moduli fields). If the size of the extra dimensions changes with cosmological time, we could have a time evolution of these scalar fields. In particular, in string theory all coupling constants are determined by the expectation values of moduli fields. Coupling constants could thus easily depend on time [3-5].

• Dilaton fields, see, e.g., [35-37]: they are similar to moduli, but we expect them to couple universally to matter, as gravity does. These models include Brans-Dicke-type fields and also scalar fields coupled non-minimally to the curvature scalar, e.g. ϕ²R, where R is the Ricci scalar.

• Vacuum evolution models: in some extensions of the Standard Model, the vacuum expectation value of the Higgs boson can evolve with time [38,39]. As the vacuum expectation value of the Higgs boson fixes the weak scale, this would lead to a time variation of all fermion masses as well as of the electroweak boson masses. This is typical of inflation-type models.

• Tests of grand unified theories [3,46-49]: in grand unified models, time shifts in α and in the strong coupling constant α_s (equivalently Λ_QCD) are related. If time variations of both α and µ were observed, predictions of grand unified theories could be directly probed. The same observation applies to shifts in lepton and quark masses. Because the relations between quark and lepton masses are very model dependent, clocks could help to determine the correct unification theory using very low-energy data, without the need to produce supermassive particles.
We thus see that our generic theoretical framework enables us to study a very wide variety of models, and also to test different scenarios of ultra-high-energy physics with very low-energy experiments. This is obviously only a subset of the models that can be studied with these methods. Note that our approach enables us to describe both fundamental variations of constants, as originally envisaged by Dirac, and the effective time evolution of constants where the variation is due to interactions with additional fields, as in the case of, for example, ultralight dark matter. We point out that this framework can also be applied to massive spin-1 and spin-2 fields, with only minor modifications in the way these higher-spin fields couple to matter. It should also be emphasized that, while we have looked at a damped-oscillator model, there are other important classes of models that could have been considered. Depending on the boundary conditions, other solutions are possible, e.g., soliton models, transient phenomena, cosmic strings, domain walls, kink solutions, etc. Great care needs to be taken when interpreting data, as new-physics signals could easily be lost when a specific functional form of the signal is assumed.

Note that for infinitesimal time differences between two measurements of the fundamental constants, we can consider the first-order shifts. For the data analysis performed in the next section, it is useful to consider fractional frequency shifts in atomic clocks, δν/ν, where ν is the frequency of an atomic transition. Since these fractional frequencies depend on at least one fundamental constant, new physics in the form of g(ϕ) could be imprinted on laboratory measurements. As observables involve comparisons with a reference, fractional frequency ratios are considered in this context. These ratios are related to a linear combination of relative fundamental-constant variations, where K_gi is a sensitivity coefficient particular to the system of interest [50-52]. The frequencies of optical and microwave transitions can be expressed in terms of constants A and B specific to a given transition, relativistic corrections F_opt(α) and F_MW(α), the nuclear g-factor g_N, the speed of light c and the Rydberg constant R_∞. This work focuses on the optical-to-optical Yb+/Sr and optical-to-microwave Sr/Cs ratios. The former is sensitive to α variations, whereas the latter is sensitive to α, µ and g_N variations. We now turn our attention to the data analysis.

3 Data and analysis framework

Experimental Setup

In this work we analyze frequency ratio data produced by atomic clocks based on neutral strontium atoms in a lattice trap (Sr) [55], a singly charged ytterbium ion in a Paul trap (Yb+) [56], and neutral cesium atoms launched in a fountain configuration (Cs) [57]. The properties of the atomic and ionic energy transitions are summarized in Table 1.

Table 1: Summary of the atomic and ionic energy transitions used to produce high-stability frequency data. K_X values are taken from references [51,58-61] and given to two decimal places.
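As detailed in the next paragraph, every clock is counted against a common hydrogen maser, so the maser frequency cancels when ratios are formed in post-processing. A toy illustration with entirely hypothetical noise levels, sketching why the derived ratio is maser-free:

```python
import numpy as np

# Hypothetical simultaneous fractional-ratio measurements against the maser.
rng = np.random.default_rng(0)
n = 1000
maser_noise = 1e-13 * rng.standard_normal(n)          # common-mode maser term
r_yb_hm = 1.0 + maser_noise + 1e-16 * rng.standard_normal(n)
r_sr_hm = 1.0 + maser_noise + 1e-16 * rng.standard_normal(n)

# The Yb+/Sr ratio is formed in post-processing; the common maser term
# cancels, leaving only the clock-limited scatter.
r_yb_sr = r_yb_hm / r_sr_hm
print("Yb+/HM scatter:", r_yb_hm.std())   # dominated by the maser, ~1e-13
print("Yb+/Sr scatter:", r_yb_sr.std())   # maser-free, ~1.4e-16 level
```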
The Sr, Yb+ and Cs clock frequencies are all measured relative to an active hydrogen maser (HM). For the optical clock frequencies from Sr and Yb+, this measurement is made via a frequency comb referenced to the 10 MHz output of the maser. The microwave clock frequency from Cs, however, can be measured directly against the maser. As all the measurements are made during the same observation window, the frequency ratios Yb+/Sr, Yb+/Cs and Sr/Cs can be calculated in post-processing and are independent of the maser frequency. The achieved stabilities in the Yb+/Sr, Yb+/Cs and Sr/Cs frequency ratios are all close to the quantum-projection-noise limits, determined by the number of atoms interrogated, the clock cycle times and the quality of the local master oscillator [62]. Fig. 1 displays the time series of data used in this work, plotted as fractional frequency ratios, with the reference value of ν_HM taken to be 10 MHz. The Sr and Cs data were available between 1st-14th July 2019, with total uptimes of 73% and 93% respectively; Yb+ data were available between 1st-6th July 2019, with a total uptime of 76%.

Experimental Results: Frequency Ratio Instability

In the presence of general noise processes, the mean and standard deviation of data sets are not guaranteed to converge as the number of samples increases. It is therefore common practice to estimate the spread of r[i/j] over different averaging times, τ, using the Allan deviation, σ_r(τ), and its extensions. These are more informative estimators of spread, as they converge for data sets exhibiting the most common kinds of non-stationary statistics [64,65]. Specifically, in this work we characterise the instability of our data using the Modified Allan Deviation (MDEV). The MDEV is given by the square root of the Modified Allan Variance, defined for a data set of M measurements y_k averaged over averaging time τ, where m is the averaging factor such that τ = mτ₀ and τ₀ is the original sampling interval of the data points [65]. As our frequency data were measured on a frequency counter configured in Λ-counting mode, the MDEV was the appropriate statistic to characterise the instability [66-68]. To directly compare instability estimates from different types of Allan deviation, the estimators may be converted between each other using the relations documented in [69].
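The Modified Allan Variance just described can be computed directly from frequency data; a plain-NumPy sketch of the textbook estimator (the standard NIST SP1065 phase-data form is assumed here), without the gap interpolation of production tools:

```python
import numpy as np

def mdev(y, tau0, m):
    """Modified Allan deviation of fractional-frequency data y at tau = m*tau0.

    Frequency is integrated to phase, then averaged second differences of
    phase are taken over windows of m samples (NIST SP1065 form).
    """
    x = np.concatenate(([0.0], np.cumsum(y))) * tau0   # phase record
    N = len(x)
    if N < 3 * m + 1:
        raise ValueError("record too short for this averaging factor")
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]          # second differences
    c = np.concatenate(([0.0], np.cumsum(d2)))
    inner = c[m:] - c[:-m]                             # sums of m consecutive terms
    mvar = np.mean(inner ** 2) / (2.0 * m ** 4 * tau0 ** 2)
    return np.sqrt(mvar)

# Illustration with toy white frequency noise: the MDEV should fall roughly
# as 1/sqrt(tau), the signature of a WFM process.
rng = np.random.default_rng(1)
tau0 = 1.0
y = 1.4e-15 * rng.standard_normal(200_000)
for m in (1, 4, 16, 64):
    print(m * tau0, mdev(y, tau0, m))
```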
Fig. 2 displays the values of σ_r(τ) calculated at octave intervals of averaging time. For 60 s ≤ τ ≤ 30 000 s, the σ_r[Yb+/Sr] curve is well approximated by the power-law behavior σ_r[Yb+/Sr](τ) ≈ 1.4 × 10⁻¹⁵/√(τ/s). This indicates that the data are well described on these timescales by a white frequency modulation (WFM) noise process with h₀ ≈ 8.3 × 10⁻³⁰ Hz⁻¹, where h₀ is a constant value of power spectral density [64,65]. This is not true for τ < 60 s, because this is shorter than the time constant for steering the Yb+ clock laser onto the atomic transition frequency. At these shorter timescales, correlated noise from the clock laser dominates the instability. For averaging times 600 s ≤ τ ≤ 80 000 s, the instabilities of Sr/Cs and Yb+/Cs are well approximated by σ_r[Sr/Cs] ≈ σ_r[Yb+/Cs] ≈ 1.6 × 10⁻¹³/√(τ/s), corresponding to WFM noise with h₀ ≈ 1.0 × 10⁻²⁵ Hz⁻¹. The small difference in instability between the Yb+/Cs and Sr/Cs ratios can be attributed to the fact that the two sets of data span different time periods with different uptimes. While σ_r[Sr/Cs] appears to increase at the highest averaging time, we attribute this to error introduced by the routine used to calculate MDEVs, which interpolates gaps in the data record: when downtime is dominated by a small number of large gaps in the data record (as is the case here), this interpolation can introduce spurious drifts and inflate the instability estimate at the largest averaging times [70,71]. Therefore we do not consider averaging times above τ = 80 000 s.

Some publications [22] have attempted to constrain variations in fundamental constants using frequency data from hydrogen masers (HM), reasoning that ν_HM shares the K_X sensitivities of the 2S₁/₂ (F=0 − F=1) hyperfine transition in atomic hydrogen to which the maser cavity is tuned. We do not follow this approach in this work, as we cannot confirm over which timescales this reasoning holds true in our commercial maser system; the maser wall shift, the resonant cavity, the voltage-controlled crystal oscillator, etc., all contribute to the instability of ν_HM over certain timescales and introduce sensitivity to additional variables, e.g. temperature [72]. Consequently, we do not use optical-to-maser ratio data (Sr/HM or Yb+/HM) directly in this work to place constraints on the variations in fundamental constants. Instead, we use optical-to-cesium ratio data, despite the higher WFM instability in the data sets, as we are confident in the sensitivities of ν_Cs to variations in the fundamental constants.

On timescales for which the instability of the frequency ratio data is dominated by the behavior of the atomic transitions, we place constraints on fundamental constants using Eq. (12) and the K_X values taken from Table 1.
Due to the negligible sizes of ΔK_µ^(Yb+/Sr) and ΔK_q^(Yb+/Sr), we assume that the instability in fractional changes in α to leading order must be less than σ_r[Yb+/Sr] scaled by the magnitude of the sensitivity |ΔK_α^(Yb+/Sr)| = 6.01. Thus, we constrain the instability of fractional changes in α to be σ_(Δα/α) = σ_r[Yb+/Sr]/6.01 ≤ 2.3 × 10⁻¹⁶/√(τ/s) on timescales of 60 s ≤ τ ≤ 30 000 s. With σ_(Δα/α) constrained to two orders of magnitude below the noise level of Sr/Cs and Yb+/Cs, we make the further assumption that any remaining instability in r[Sr/Cs] would be dominated by Δµ/µ rather than Δm_q/m_q, due to the small size of ΔK_q^(Sr/Cs) relative to ΔK_µ^(Sr/Cs) = 1.00. Under this assumption, to leading order we may similarly constrain the instability of fractional changes in µ to be no greater than σ_r[Sr/Cs] scaled by the magnitude of the sensitivity ΔK_µ^(Sr/Cs) = 1.00. Thus, we constrain the instability of fractional changes in µ to be σ_(Δµ/µ) = σ_r[Sr/Cs]/1.00 ≤ 1.6 × 10⁻¹³/√(τ/s) on timescales of 600 s ≤ τ ≤ 80 000 s. These constraints on the instability of fractional changes in α and µ as a function of averaging time are shown in Fig. 3 and summarized in Table 2.

These estimates of fractional variations in frequency ratios, and hence in α and µ, on different timescales make no assumptions about the functional form of the variations. These results can be translated into model-independent limits, which will be discussed in Sec. 4.

Experimental Results: Sinusoidal Oscillations

If one chooses to focus specifically on oscillatory time variations of fundamental constants, these can be generically described by a damped harmonic oscillator, given in Eq. (2). Constraints on damped oscillatory signals could be obtained for a range of oscillation frequencies f by finding the best-fit amplitude at each oscillation frequency, A(f), and the best fit to the damping factor, Γ. However, it was decided as a first stage of analysis to fit undamped oscillations (Γ = 0), to reduce the number of parameters requiring fitting. If any significant signals or features were detected as a result of fitting undamped oscillations, reasonable values of Γ could be inferred from the linewidths of any peaks, and further analysis could be performed to fit for Γ and constrain damped oscillations to the data. In the case that significant features were observed, we implemented a routine to fit the data to oscillations weighted by an envelope function Z(t), which for Z(t) = exp(−Γt/2) would model underdamped oscillations [73]. During testing it was observed that this routine did not perform better than standard periodograms for detecting damped oscillations in the parameter space of interest, so it was decided to use a standard periodogram.

Similar to the approach taken in recent works [22,23,25,74], we constrain the magnitude of undamped oscillations in our data by estimating the power spectral density of the fractional frequency ratios, S_r(f), via the Lomb-Scargle periodogram (LSP) [75,76]. The LSP is an estimator of S_r(f) for time series that suffer from irregular sampling or data gaps due to experiment downtime [77]. Calculating the LSP is equivalent to performing linear least-squares fits of a data set to the amplitude of sinusoids at a range of frequencies; it allows algorithms in the style of a fast Fourier transform to be used on time series with incomplete or irregular sampling, without having to account for data gaps by deconvolving the time series with composite window functions [76,78].
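As a sketch of this procedure, the Astropy implementation can be applied directly to an irregularly sampled series; all numbers below are hypothetical stand-ins for the real ratio data:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Gapped toy series standing in for a fractional frequency ratio
# (hypothetical noise level and uptime pattern).
rng = np.random.default_rng(2)
t_full = np.arange(0, 200_000, 60.0)          # 60 s sampling grid
t = t_full[rng.random(t_full.size) > 0.25]    # ~25% downtime, irregular gaps
y = 1.4e-15 * rng.standard_normal(t.size)     # white-FM-like ratio data

# PSD-normalised Lomb-Scargle periodogram with the default oversampling
# factor n0 = 5, evaluated up to the Nyquist frequency of the regular grid.
freq, power = LombScargle(t, y, normalization='psd').autopower(
    samples_per_peak=5, maximum_frequency=1.0 / (2 * 60.0))
print(freq.min(), freq.max(), power.mean())
```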
Power spectral density estimates for the fractional frequency ratios, S_r(f), were calculated using the implementation of the LSP [75,76] provided in the Astropy Python package [77]. The Nyquist frequency, f_Ny = (2Δt_sample)⁻¹, was chosen as the bandwidth upper limit for ease of comparison with previous works, and with any future works that may have 100% data uptime. However, S_r(f) is not guaranteed to be free of aliasing for f < f_Ny when signals are irregularly sampled, as gaps in data sampling introduce spectral leakage below f_Ny [75]. The resolution of the frequency grid was not tuned beyond using the default oversampling factor of n₀ = 5 [77]. The fidelity of the LSP in detecting oscillations in noisy data was validated by injecting sinusoidal signals into data sets with similar noise statistics to those of the observed data.

Fig. 4 shows S_r(f) for each frequency ratio, with black dashed lines indicating the estimated noise levels, h₀, calculated from values of σ_r(τ) [69]. Here it can be seen again that while r[Sr/Cs] and r[Yb+/Cs] are well described by WFM noise, this is not true of r[Yb+/Sr] for f ≥ (60 s)⁻¹ = 1.67 × 10⁻² Hz, where more complex power-law behaviour can be seen: the action of the servos steering the probe-light frequency onto the atomic/ionic transition frequencies leads to an approximate S_r(f) ∝ f⁻¹·⁵ behavior.

Since our longest data set had a length of T = 14 days and our minimum sampling time was Δt = 60 s, the approximate range of Γ detectable with our data lies between Γ_min ∼ 1/T ≈ 8.3 × 10⁻⁷ Hz and Γ_max ∼ 1/Δt ≈ 1.7 × 10⁻² Hz. This range of detectable values of Γ corresponds to a range of lifetimes between 60 s and 1.2 × 10⁶ s, using τ* = Γ⁻¹. It is interesting to compare this range to lifetimes corresponding to the known forces of nature. Typical lifetimes in Quantum Electrodynamics are of the order of 10⁻²⁰ s to 10⁻¹⁶ s. For the weak interactions one finds 10⁻¹³ s to 10³ s, while for the strong force one finds 10⁻²³ s to 10⁻²⁰ s. Thus the range of detectable lifetimes in this work partly overlaps with lifetimes typical of the weak force.

Limits on oscillations in Δα/α in the frequency range that exhibits WFM noise can be formed by integrating S_r[Yb+/Sr](f) over the nominal frequency bin width (δf[Yb+/Sr] = 1/T[Yb+/Sr] = 2.5 × 10⁻⁶ Hz) to obtain the total power of each bin, then taking the square root to obtain the one-sided fractional amplitude spectrum, A_r[Yb+/Sr](f).
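The bin-power-to-amplitude step can be made concrete with the quoted WFM level and bin width; a small sketch (the PSD value is the fitted h₀ for Yb+/Sr quoted above, everything else follows from the text):

```python
import numpy as np

# Integrate S_r(f) over the nominal bin width, then take the square root
# to get the one-sided fractional amplitude; finally scale by the
# sensitivity |Delta K| = 6.01 to obtain an alpha amplitude.
S_bin = 8.3e-30          # quoted WFM PSD level for Yb+/Sr [1/Hz]
df = 2.5e-6              # quoted nominal frequency bin width [Hz]
A_r = np.sqrt(2 * S_bin * df)   # one-sided sinusoid amplitude in that bin
A_alpha = A_r / 6.01
print(f"A_r ~ {A_r:.1e},  A_alpha ~ {A_alpha:.1e}")
# The statistical false-alarm thresholds discussed next sit above this
# bare noise-floor estimate, as expected.
```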
Constraints on oscillations in α and µ can be extracted from the fractional amplitude spectra A_α(f) and A_µ(f) by considering the statistics of the LSP estimator. In the absence of prominent peaks, some authors [23] have placed constraints at the observed power in each frequency bin of the spectra. Following the approach taken in [21,22,25,74] and others, confidence intervals on the constraints at each frequency bin were calculated by simulating a large number of control spectra with noise statistics equivalent to the observed spectra, then using the empirical cumulative distribution function of the simulated power in each frequency bin to estimate confidence intervals, in this case at 95% confidence. Whilst we follow this approach to calculate 95% confidence intervals on our power estimates, using these confidence intervals as "limits" has the disadvantage that the bounds placed on a WFM process could differ across neighboring frequency bins by an order of magnitude or more, and would likely fluctuate between repeat experiments even if the noise level in the experiment remained constant.

We believe a more appropriate and reproducible bound for a WFM spectrum is one that is constant across all frequencies: under the null hypothesis that the power differences across frequency bins are merely the result of fluctuations due to a WFM noise process, we produce a global bound based on an estimate of the white-noise level, with exclusion limits estimated via false-alarm levels, A_X(p ≤ p₀). These false-alarm levels represent the value of A_X(f) that would be exceeded with a probability of no more than p₀ across all frequencies in the case of white noise only. While analytic expressions exist for the distribution of S_r(f) for regularly sampled data (assuming uncorrelated Gaussian errors) [76,79], in this work we employ computational methods, as recommended in [77].

Simulating data sets with the same noise and data gaps as the observed spectra, we use the bootstrap method [80] to estimate bounds at the 68% significance level of A_α(p ≤ 0.32) = 5.6 × 10⁻¹⁸ and A_µ(p ≤ 0.32) = 1.3 × 10⁻¹⁵. More appropriate to particle physics would be the equivalents of a 5σ significance level, which were estimated using the Baluev method [81] and which yield A_α(p ≤ 3.5 × 10⁻⁷) ≈ 8.9 × 10⁻¹⁸ and A_µ(p ≤ 3.5 × 10⁻⁷) ≈ 2.1 × 10⁻¹⁵. As shown in Fig. 5, all spectral peaks fall well below these fractional amplitudes, though it should be admitted that this is a fairly strict detection threshold. Though global bounds across the entire frequency domain of the LSP estimate are more conservative than bounds one could achieve by setting constraints on individual frequency bins, for the reasons outlined above we believe them to be better motivated and more legitimate for processes that appear to be predominantly WFM noise.
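A minimal version of the simulation-based global false-alarm level, under the white-noise null and with hypothetical gap structure and noise level:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(3)
t_full = np.arange(0, 200_000, 60.0)
t = t_full[rng.random(t_full.size) > 0.25]   # same toy gap structure
sigma = 1.4e-15                              # toy white-noise level

def max_power(y):
    # Highest standard-normalised LSP peak over the search band.
    freq, p = LombScargle(t, y, normalization='standard').autopower(
        samples_per_peak=5, maximum_frequency=1.0 / (2 * 60.0))
    return p.max()

# Empirical distribution of the loudest peak under the white-noise null:
# the (1 - p0) quantile of the simulated maxima is the global false-alarm
# power level at probability p0 (here p0 = 0.32, i.e. 1-sigma equivalent).
sims = np.array([max_power(sigma * rng.standard_normal(t.size))
                 for _ in range(200)])
level_1sigma = np.quantile(sims, 1 - 0.32)
print("global 1-sigma false-alarm power level:", level_1sigma)

# An observed peak above this level would be inconsistent with pure white
# noise at that probability; one fresh noise realisation as a sanity check.
print(max_power(sigma * rng.standard_normal(t.size)) > level_1sigma)
```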
With only two peaks slightly exceeding the 1σ false-alarm level for A_µ(f), and no spectral peaks exceeding the 5σ false-alarm levels for either A_µ(f) or A_α(f), we have high confidence that all peaks in A_α(f) and A_µ(f) are consistent with instabilities due to WFM noise processes. Therefore, based on the estimated WFM noise levels of the fractional frequency ratio data, we place constraints on sinusoidal oscillations in α at the 1σ significance level of A_α(f) ≤ 5.6 × 10⁻¹⁸ and at the 5σ significance level of A_α(f) ≲ 8.9 × 10⁻¹⁸, over the frequency range 2.5 × 10⁻⁶ Hz ≤ f ≤ 1.7 × 10⁻² Hz. We also place constraints on sinusoidal oscillations in µ at the 1σ significance level of A_µ(f) ≤ 1.3 × 10⁻¹⁵ and at the 5σ significance level of A_µ(f) ≲ 2.1 × 10⁻¹⁵, over the frequency range 8.7 × 10⁻⁷ Hz ≤ f ≤ 8.3 × 10⁻⁴ Hz. A summary of these 1σ and 5σ constraints is shown in Table 3.

Table 3: Summary of constraints on the powers of fractional oscillations in α and µ produced in this paper, expressed as Fourier-spectrum amplitude detection thresholds, A_X(f), at the 1σ and 5σ significance levels.

Parameter space | Amplitude constraint (1σ significance) | Amplitude constraint (5σ significance)
A_α(f) | ≤ 5.6 × 10⁻¹⁸ | ≲ 8.9 × 10⁻¹⁸
A_µ(f) | ≤ 1.3 × 10⁻¹⁵ | ≲ 2.1 × 10⁻¹⁵

These constraints on sinusoidal oscillations can be further interpreted in the case of specific models such as scalar dark matter, which is the focus of Section 5.

Model-independent constraints

As emphasized before, the main strength of our new theoretical description of a time evolution of constants is that it enables us to set model-independent constraints on the time variation of these constants. Independent of the shape of the function that describes the time variation, we now set limits which can then trivially be interpreted in specific models.

From Eq. (13), a transition frequency ν may be parametrized in terms of sensitivity coefficients K_α, K_µ and K_q characteristic of the transition ν [82], with m_q ≡ (m_u + m_d)/2. The quark coefficient parametrizes changes in the nucleon mass, δM_N/M_N = k_q^(M_N)(δm_q/m_q), and in the nuclear magnetic moment, δg_N/g_N = k_q^(g_N)(δm_q/m_q), in terms of m_q variations. The ratio of two frequencies, r = ν₁/ν₂, is independent of the dimensionful constants; varying Eq. (16) with respect to α, m_e/Λ_QCD and m_q/Λ_QCD, and noting (11), expresses δr/r in terms of these variations, where q is the mass-weighted mean quark coupling. Using the published values of K_α^(Yb+) = −5.95, K_α^(Sr) = 0.06 and K_α^(Cs) = 2.83 from Ref. [52], and K_µ^(Cs) = 1 and K_q^(Cs) ≈ 0.07 from Ref. [82] (summarized in Tab. 1), results in the fractional frequency ratio expressions (18) and (19). As can be observed in Fig. 2, constraints from Yb+/Sr measurements enable a bound on the coefficient d_γ^(n) far below what is possible from Sr/Cs measurements. It is therefore appropriate to neglect d_γ^(n) in Eq. (19), which gives an effective coupling d^(Sr/Cs) for the Sr/Cs measurements. Comparing Eqs. (18) and (19) with the data in Fig. 2, model-independent constraints can be placed on the products of the coupling strengths and the instability of ϕⁿ(t) over different timescales: the combination κⁿ|d_γ^(n)| σ_ϕⁿ(τ) is bounded for timescales 60 s ≤ τ ≤ 30 000 s, and κⁿ|d^(Sr/Cs)| σ_ϕⁿ(τ) for timescales 600 s ≤ τ ≤ 80 000 s, where σ_ϕⁿ(τ) is the Modified Allan Deviation of ϕⁿ, defined in Eq. (15), and the contribution from d_g^(n) is assumed to be subdominant.
Note that these constraints do not assume any specific fundamental-physics model, nor do they make any assumption about the functional form of ϕⁿ(t). The constraints in the equations above are only valid for the specified values of τ explored in this work, and cannot constrain fluctuations on timescales outside this range.

The constraints on σ_ϕⁿ(τ) in Eqs. (21) and (22) can be roughly interpreted as limits on the average magnitude of fluctuations in ϕⁿ(t) between any two points in time t and (t + τ). For example, for two times separated by τ = 1 000 s, the fluctuation [ϕⁿ(t + τ) − ϕⁿ(t)], multiplied by κⁿ|d|, should roughly satisfy the corresponding bound above. Only once the functional form of ϕ is specified is it possible for independent constraints to be placed on the couplings, which we explore in the special case of ultralight dark matter in the following section.

Constraints on ultralight dark matter

One of the strongest cases for positing additional fundamental scalar fields is the problem of dark matter. Particle dark matter in the mass range 10⁻²² eV ≲ m_ϕ ≲ 1 eV is known as ultralight dark matter (ULDM), and in recent years significant efforts have been focused on detecting ULDM through apparent oscillations of fundamental constants. The upper bound m_ϕ ≈ 1 eV occurs when the number density n of bosons in the reduced de Broglie volume (λ/2π)³ satisfies n(λ/2π)³ ≫ 1, resulting in a macroscopic phase-space occupation that exhibits Bose-Einstein condensation. Wavelengths λ ∼ O(kpc) span distances comparable to the smallest dwarf-galaxy halos and imply a lower bound m_ϕ ≈ 10⁻²² eV [83]. This bound also roughly coincides with the upper limit of dark matter being completely accounted for by m_ϕ [84], which is a common assumption in studies seeking to exclude couplings at a given confidence level.

The Standard Halo Model is assumed for the dark-matter density and velocity profiles, where, for coordinates centered on the galaxy, v_vir ∼ 10⁻³ is the virial speed (in natural units) of dark matter with isotropic distribution ⟨v⃗_vir⟩ = 0⃗.
As a result of the solar system's motion through the dark-matter halo at speeds comparable to v_vir, an Earth-based laboratory experiences a dark-matter wind with |k⃗| ≈ m_ϕ v_vir ≪ m_ϕ when neglecting subdominant corrections [85]. In the dark-matter rest frame, oscillations are controlled by the rest mass m_ϕ = 2πf_ϕ, where f_ϕ is the Compton frequency in natural units. Oscillations are coherent in time for τ_c ∼ 4π/(m_ϕ v_vir²) ≳ 10⁶ T_c ≫ T_data, where the oscillation timescale T_c = 1/f_ϕ greatly exceeds the experimental timescale T_data. As the coherence length λ ∼ 2π/(m_ϕ v_vir) is larger than solar-system scales for all f_ϕ of interest, the scalar-field amplitude is approximately constant. Under these conditions, ULDM is described by a macroscopic, nearly constant-amplitude waveform oscillating at the underlying particle Compton frequency, up to small velocity corrections ∼ O(v_vir²). Measurements of the cosmic microwave background indicate that dark matter was present in the early universe, and it is strongly constrained to be stable on experimental timescales [86]. The standard theoretical treatment of ϕ as dark matter assumes cosmological evolution in a flat Friedmann-Lemaître-Robertson-Walker universe, in which (2) follows with Γ = 0. The field has an amplitude ϕ₀ = √(2⟨ρ_ϕ⟩)/m_ϕ, resulting from time-averaging the energy density ρ_ϕ = (ϕ̇² + m_ϕ²ϕ²)/2 in the nonrelativistic limit. As all measurements presented in this paper were taken at a single location, the dynamics of ϕ are described by the solution ϕ(t) = ϕ₀ cos(m_ϕ t + θ).

Typically it is assumed that the scalar field comprises the entirety of the dark-matter density inferred at the solar galactocentric radius R₀ ≃ 8 kpc, i.e. the "local density" ρ_DM(R₀) ≈ 0.3 GeV/cm³ [87]. This figure should be taken with caution and has a large influence on the constraints, as densities from solar-system objects, including planetary ephemerides [88] and more recently asteroid data [89], only constrain ρ_DM ≲ (10⁴-10⁶) × ρ_DM(R₀). The immediate implications are for the sensitivity to the couplings in Eqs. (18) and (19): since for a generic coefficient (assuming the field saturates the total density) the sensitivity to d^(n) scales inversely with ϕ₀ⁿ ∝ ρ_DM^(n/2), at best linear (quadratic) constraints would scale downward by ∼10³ (∼10⁶). However, they could also be substantially weakened. Constraints would need to be reconsidered in the case of multicomponent dark matter. For example, interactions between the different components could lead to nonzero decay widths (i.e. Γ ≠ 0), and the density contribution would be spread across the components, ρ_DM = Σ_i ⟨ρ_ϕi⟩. The extracted limits would be weaker, scaled by ⟨ρ_ϕi⟩/ρ_DM.

Scalar couplings

Previous experiments monitoring electronic and microwave frequencies have resulted in constraints on the scalar couplings d_γ^(n), d_me^(n) and d_g^(n) assuming dark matter [15,21-24,74,90-93]. In the present work, constraints may be extracted by using the amplitude spectra A_X(f) = A_r(f)/|ΔK| for frequency bin f and the relevant sensitivity coefficient ΔK from Fig. 5, identifying f = f_ϕ = m_ϕ/2π for linear couplings and f = 2f_ϕ = m_ϕ/π for quadratic couplings and noting Eq. (17). Note the effects of boosting from the dark-matter rest frame to the laboratory frame, which introduces a broadening of the oscillation frequency f. As a result, the sum over distinct field modes for measurement times T ≪ τ_c reduces the sensitivity to the linear n = 1 coefficients by a stochastic factor ≈ 3 [94].
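Under these assumptions, converting an amplitude bound into a reach on the linear photon coupling is a short natural-units exercise; a sketch using the quoted local density and the 1σ amplitude bound (the stochastic factor ≈ 3 is omitted for simplicity, and the printed numbers are indicative only):

```python
import numpy as np

# Natural-unit bookkeeping (hbar = c = 1): 1 cm^-1 = 1.9733e-5 eV.
hbar_c_eV_cm = 1.9733e-5
rho_dm = 0.3e9 * hbar_c_eV_cm**3        # 0.3 GeV/cm^3 -> eV^4
M_P = 2.435e27                          # reduced Planck mass [eV]
kappa = 1.0 / (np.sqrt(2.0) * M_P)      # [eV^-1]

def phi0(m_eV):
    # Field amplitude if phi saturates the local density: sqrt(2*rho)/m.
    return np.sqrt(2.0 * rho_dm) / m_eV

def d_gamma_limit(m_eV, A_alpha=5.6e-18):
    # Linear coupling reach from an amplitude bound:
    # delta_alpha/alpha = kappa*d*phi0 <= A_alpha  =>  d <= A_alpha/(kappa*phi0)
    return A_alpha / (kappa * phi0(m_eV))

for m in (1e-20, 1e-18, 8e-17):         # masses spanned by the data
    f = m / (2 * np.pi * 6.582e-16)     # Compton frequency in Hz (hbar in eV*s)
    print(f"m = {m:.0e} eV (f ~ {f:.1e} Hz): d_gamma <~ {d_gamma_limit(m):.1e}")
```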
The data encompass frequencies 10⁻⁶ Hz ≲ f ≲ 2 × 10⁻² Hz, corresponding to masses 4 × 10⁻²¹ eV ≲ m_ϕ ≲ 8 × 10⁻¹⁷ eV.

The results for the linear and quadratic scalar couplings discussed in Sec. 2 are presented for the case of ULDM in Figs. 6-9. Note that some works assume the slightly larger estimate ρ_DM ≈ 0.4 GeV/cm³, which for purposes of comparison here amounts to a negligible difference. For all figures, the right axes compare κⁿ d_j^(n) = 1/Λ_jⁿ for a generic dimensionful scale; we choose the scale to be identified with Λ_j and Λ′_j for linear and quadratic couplings, respectively. For the coupling d_γ^(1) in Fig. 6, a new exclusion region up to roughly an order of magnitude beyond previously published work is observed for 10⁻²⁰ eV ≲ m_ϕ ≲ 10⁻¹⁷ eV. This improved sensitivity is mostly explained by the difference in K factors: for example, |ΔK_α^(Rb/Cs)| ≈ 0.5 is roughly an order of magnitude smaller than |ΔK_α^(Yb+/Sr)| ≈ 6. The extracted bounds also essentially surpass Eöt-Wash [95,96] and MICROSCOPE [97,98] equivalence-principle (EP) tests assuming a light dilaton. We note that recent experimental results [25], also using Yb+ and Sr clocks, have claimed even tighter constraints than extracted here on the linear photon coupling d_γ^(1) in this mass range.

For the quadratic coupling d_γ^(2), a similar trend with respect to clock studies [90,91] using Rb/Cs [21] and ¹⁶⁴Dy/¹⁶²Dy spectroscopy [74] persists, and using EP-test results to extract bounds on quadratic couplings [99] shows greater sensitivity for masses ≳ 4 × 10⁻¹⁸ eV. We also include constraints from big bang nucleosynthesis (BBN) [91], which surpass the sensitivity of other experiments in the probed mass range.

To the best of our knowledge, the only previously published clock-based studies to account for the stochastic degradation factor ≈ 3 were those performed by the BACON collaboration with Al+/(Yb, Hg+) and Yb/Sr clock comparisons [23], JILA using clock-cavity Sr/Si and H/Si comparisons [22], and NMIJ using Yb/Cs clock comparisons [24]. Note that EP tests do not rely on assumptions about the amplitude ϕ₀, and thus about the contribution of the scalar field to the dark-matter abundance. Similarly, though BBN constraints use ⟨ρ_ϕ⟩ = ρ_DM, the field is non-oscillating with constant ϕ₀ for m_ϕ ≪ 10⁻¹⁶ eV, so coherence considerations are irrelevant.

(Figs. 7 and 8: limits on linear and quadratic d_g.)

The final scalar constraints are extracted on the parameter A from the scalar-Higgs interaction and are presented in Fig. 9. The simplest (n = 1) linear couplings have garnered theoretical attention since they can emerge from the technically natural operator L_ϕH = −AϕH†H for Higgs doublet H [100]. The sensitivity coefficient involves a dimensionless factor b ∼ 0.2-0.5 in the Higgs-nucleon Yukawa coupling g_hNN = b m_N/v, where v ≈ 246 GeV is the Higgs vacuum expectation value and the nucleon mass is m_N = (m_p + m_n)/2 ≈ 0.94 GeV. To compare easily with existing Rb/Cs limits, we choose b = 0.2 and use the relevant Sr/Cs values from Table 1 of Ref. [90]. Using the Sr/Cs spectrum and noting κd_H ↔ A/m_h² for Higgs mass m_h ≈ 125 GeV in (24) produces the limit. For the ratios considered here, the bulk of the sensitivity comes from (1 − b)K_µ, as the electromagnetic portion is suppressed by an additional factor of α (from radiative corrections) and the sensitivity of the quark contributions to m_N and g_N is suppressed relative to m_e/m_N by around an order of magnitude.

(Figs. 6-8 captions: The best fit from Yb+/Sr (red) along with the expected noise level (black dashed line) and 95% confidence level (C.L.) (light red) lines are displayed. Comparisons with constraints on linear couplings include Rb/Cs clocks [21], combined Al+/(Yb, Hg+) and Yb/Sr clocks [23], Yb/Cs clocks [24], Sr/Si and H/Si cavity comparisons [22], Dy/Dy spectroscopy [74] and EP tests [95-98]. Comparisons with constraints on quadratic couplings include clocks [90,91], EP tests [99] and BBN [91].)
This implies, e.g., that optical-optical Yb+/Sr and microwave-microwave Rb/Cs ratios have weaker sensitivity to A than optical-microwave ratios. It is worth noting that b = 0.5 gives almost no sensitivity to Rb/Cs. Accordingly, Sr/Cs comparisons have more robust potential for constraining A; however, as the existing Rb/Cs data set is longer, competitive limits with respect to fifth-force searches exist in the range 10⁻²⁴ ≲ m_ϕ ≲ 10⁻²⁰. Note again that the current Rb/Cs limits do not include the stochastic degradation factor ≈ 3, which would shift the displayed curve upwards accordingly.

(Figs. 7 and 8, bottom panels: The best fit from Sr/Cs (green) along with the expected noise level (black dashed line) and 95% C.L. (light green) lines are displayed, including comparisons with EP tests and Rb/Cs [21], Yb/Cs [24] and H/Si [22].)

Pseudoscalar couplings

Recently Kim and Perez [101] highlighted that atomic clocks can also provide complementary and competitive constraints on an axion-like field a coupled to gluons, where g_s and f_a are the strong coupling and the axion decay constant, respectively. As a result, the square of the pion mass undergoes small oscillations quadratic in the field, where m_u and m_d are the up- and down-quark masses and θ = a/f_a. The nucleon mass m_N(θ) and the nucleon g-factor g_N(θ) (and hence the nuclear g-factor g(θ)) have an inherent dependence on m_π(θ), as parametrized by chiral perturbation theory [61,102-104]. Similar to quadratic scalar couplings in the ultralight range, the oscillatory component of θ is set by the local density, where here ρ_DM = 0.4 GeV/cm³, m_a is not in general ∝ f_a⁻¹, and a is assumed to saturate the local density. The variation of a microwave transition frequency (13) may be expressed through variations in g(θ) and m_p using Eqs. (27) and (28) [86]. Comparing with another microwave or optical standard and identifying m_a = πf for signal frequency f from an amplitude spectrum yields a relation in which m₁₅ ≡ m_a/(10⁻¹⁵ eV) and c_r is a constant. In [101], microwave-microwave (Rb/Cs, c_r ≈ 10⁻¹) and microwave-optical (H/Si, c_r ≈ 1) comparisons are studied. One may also consider Sr/Cs where, due to the weak dependence of Sr on nuclear quantities, |δr/r| ≈ δν_Cs/ν_Cs and c_r ≈ 2 × 10⁻², including the effects of stochasticity. From Eq. (30) we plot the Sr/Cs limits, along with Rb/Cs and H/Si from [101], as well as the constraints from the neutron electric dipole moment (nEDM), in Fig. 10.

Perhaps most compelling are the high-frequency-range constraints achievable on the axion-like coupling in Fig. 10 (see blue dashed line).
In order to measure and remove density-dependent effects, the cesium fountain interleaves regular measurements with samples recorded at much higher atom densities. For the data presented here, this leads to a Cs clock cycle time of 600 s, corresponding to a maximum (Nyquist) frequency of 833 µHz. However, the cesium fountain could run with lower cycle times, and if it were operated without compensating density-dependent effects, cycle times of ≈ 5 s could be achieved, allowing Fourier frequencies up to ≈ 0.1 Hz to be probed, though at the expense of stability at longer times. This would yield constraints over an additional order of magnitude in frequency space outside the large excluded nEDM region. This contrasts with the projections from future experimental proposals, demonstrating that existing atomic-clock capabilities can provide competitive constraints in axion physics. Taken together, subsequent studies based on future data campaigns are clearly motivated.

(Fig. 10 comparisons: Rb/Cs [21], H/Si [22] and nEDM [105].)

Discussion and Conclusion

In this work, we have presented a theoretical framework to describe the time variation of fundamental constants in a model-independent way. This approach demonstrates that a realistic model for the time variation of fundamental constants has many free parameters.

Using data acquired from atomic clocks operating at optical (87Sr, 171Yb+) and microwave (133Cs) frequencies, we constrain the instability of fractional changes in α to be σ_(Δα/α)(τ) ≤ 2.3 × 10⁻¹⁶/√(τ/s) for averaging times 60 s < τ < 30 000 s, and we constrain the instability of fractional changes in µ to be σ_(Δµ/µ)(τ) ≤ 1.6 × 10⁻¹³/√(τ/s) for averaging times 600 s < τ < 80 000 s. The theoretical framework then allows us to place constraints on combinations of a new scalar field ϕⁿ(t) and the different interaction strengths d_j^(n) coupling that field to other particles: photons, electrons, quarks and gluons. These constraints are independent of the underlying physics and of the functional form of ϕⁿ(t).

As an example of a specific model, we studied ultralight-dark-matter couplings to matter and presented new constraints on low-dimension dilaton-like operators. The limits on d_γ^(1) from Yb+/Sr data exclude a new region of parameter space for masses 10⁻²⁰ eV ≲ m_ϕ ≲ 10⁻¹⁷ eV, as shown in Fig. 6. We also refer the reader to new experimental results [25] using Yb+ and Sr clocks, which claim even tighter constraints on d_γ^(1). In the future, the limits on most of the parameters presented in this paper could readily be extended into both higher and lower frequency regions. Higher frequency regions could be accessed by operating the clocks with shorter measurement cycles, and lower frequency regions could be accessed by recording longer data sets. Furthermore, clocks are being developed that are more sensitive to variations of fundamental constants, such as certain highly charged ion species [106] or molecular clocks [107]. Frequency ratios between these new types of optical clock could place significantly lower bounds on many of the coupling strengths between new scalar fields and particles of the Standard Model [108].
One can disentangle the different parameters ϕ₀ and d_j. By considering experiments sensitive to α, µ = m_e/m_p or α_s, one could in principle measure ϕ₀ and some of the d_j independently. Furthermore, in general there could be several scalar fields: some could couple to photons, others to gluons. To be very clear, couplings may not be universal.

The mass of the proton, m_p, is mainly sensitive to a time dependence of the QCD coupling constant. Remember that the QCD scale is given by Λ_QCD = µ_r [exp(2π/α_s(µ_r))]^(1/b₃), with b₃ = −7 in the Standard Model and where µ_r is the energy scale at which α_s is measured. Neglecting a possible change in the quark masses, the proton mass m_p is proportional to Λ_QCD. Using the renormalization group equation for α_s, one finds the corresponding fractional shift δα_s/α_s induced by the scalar field, proportional to −κⁿ d_g^(n) ϕⁿ(t) in the underdamped regime, in both the linear and the quadratic case.

Figure 1: Top and middle: Fractional frequency ratios of Sr/HM and Yb+/HM data counted by NPL's femtosecond frequency comb, with data averaged over 1 s intervals. Bottom: Fractional frequency ratio of Cs/HM data produced by NPL's cesium fountain, NPL-CsF2, with data pre-processed over 600 s intervals. Data were collected between 1st-14th July 2019 (MJD 58665-58679).

Figure 2: Characterisation of each fractional frequency ratio's instability using the Modified Allan Deviation, plotted at octave intervals of averaging time.

Figure 3 / Table 2: Instability estimates for fractional changes in α and µ, where σ_X(τ) ≡ σ_(ΔX/X)(τ) = σ_r(τ)/K_X^r on timescales over which the instability is dominated by the behavior of the atomic transition frequency.

Figure 4: Power spectral densities of fractional frequency ratios, estimated with the Lomb-Scargle periodogram. The levels of white frequency modulation (WFM) noise, h₀^i, are taken from the best fits to the σ_r(τ) curves presented in Fig. 2.

Figure 5: Fractional amplitude spectra for sinusoidal oscillations in α and µ. The solid horizontal lines indicate the estimated levels of white frequency modulation (WFM) noise of the spectra, and the broken horizontal lines represent the estimated false-alarm levels at the equivalent of 1σ (p ≤ 0.32) and 5σ (p ≤ 3.5 × 10⁻⁷) significance.

The constraints on d_me^(n) − d_g^(n) are shown in Fig. 7 and Fig. 8, respectively. The linear constraints are competitive and similar in shape and magnitude to the H/Si, Rb/Cs and Yb/Cs comparisons over the range of the data; however, the sensitivity of Rb/Cs extends up to two orders of magnitude below EP tests around m_ϕ ≈ 10⁻²³ eV. Note that Rb/Cs has no sensitivity to d_me^(n) − d_g^(n). Regarding the quadratic constraints, the Sr/Cs data probe a new region for clocks in the d_me^(2) − d_g^(2) panel and display roughly two orders of magnitude more sensitivity than EP tests at the low-mass end. Despite this, BBN still dwarfs this sensitivity by comparison, and both EP tests and BBN encompass the range probed for d_g^(2).

Figure 9: Constraints on the Higgs coupling parameter A. The best fit from Sr/Cs (green) along with the expected noise level (black dashed line) and 95% C.L. (light green) lines are displayed, including comparisons with Rb/Cs [90] and fifth-force searches [100].
2023-02-10T06:42:35.275Z
2023-02-09T00:00:00.000
{ "year": 2023, "sha1": "b0492eacc2b95d2fe3ab07347cef6d18d59e8411", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1367-2630/aceff6/pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "4a2b409cd39177f64e379de0d82ad94212a0ff87", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
72445445
pes2o/s2orc
v3-fos-license
Are We Learning Enough Pathology in Medical School to Prepare Us for Postgraduate Training and Examinations?

Medical schools responded to the first publication of Tomorrow's Doctors with an abbreviated syllabus and a reduction in didactic teaching hours. Prescribing errors, however, have increased, and there is a perception amongst clinicians that junior doctors know less about the pathological basis of disease. We asked junior doctors how useful they thought their undergraduate teaching in pathology had been in their postgraduate training. We received 70 questionnaire responses from junior doctors within a single deanery and found that although almost every doctor, n = 61 (87%), thought that pathology formed a major component of their postgraduate exams, most, n = 47 (67%), thought that their undergraduate teaching left them unprepared for their postgraduate careers and that they had to learn basic principles as they revised for postgraduate exams. Few used a pathology text for learning; most doctors, n = 64 (91%), relied on question-and-answer revision resources for exam preparation. Perhaps, as revision materials are used so widely, they might be adapted for long-term deep learning alongside clinical work. This presents an opportunity for pathologists, deaneries, royal colleges and publishing houses to work together in the preparation of quality written and online material readily accessible to junior doctors in their workplace.

Introduction

There have been two revisions since the original publication of the GMC's Tomorrow's Doctors in 1993 [1-3], after which medical schools revised their syllabus and curriculum to reduce the volume of facts medical students were required to learn and to reduce the amount of didactic teaching in favour of self-directed learning [4,5]. Two subjects that have suffered are clinical pharmacology and pathology, and subsequent revisions of Tomorrow's Doctors have sought to redress this.

In the wake of increasing prescribing errors [6], medical schools and some NHS Trusts now seek to teach and test clinical pharmacology and prescribing [7], and a national prescribing exam is set to be launched in 2014 [8,9]. What about pathology? Several authors lament that the reduction in taught courses in histopathology and chemical pathology has resulted in a generation of junior doctors who do not really understand what is wrong with their patients or how to interpret the results of investigations [4,5,10,11]. Postgraduate training assumes a certain level of knowledge, yet membership exams rely on a grounding in pathology that they examine in detail. So how well do undergraduate courses prepare junior doctors for membership exams and life on the ward?

We performed a study asking doctors about to sit their membership exams, and doctors who had just attained their membership, how well their undergraduate pathology teaching had prepared them for their clinical work, and what they used to prepare for the pathology component of their postgraduate exams.

Study Methods

A questionnaire study was undertaken between January 2011 and March 2011, in which 70 consecutive trainees within one UK deanery (Oxford) were handed a single-page questionnaire to complete, either in the hospital where the author works or at regional teaching sessions. Each potential respondent was asked where they had completed their undergraduate medical training, and only graduates of UK medical schools were asked to complete a questionnaire, as we sought to evaluate undergraduate pathology teaching in the UK only.
Each respondent approached gave verbal consent to complete the questionnaire and completed it themselves. Details of name, grade, and specialty were recorded, and the trainee sample included a range of different specialities including surgery, medicine, anaesthetics, paediatrics, obstetrics, and gynaecology. Results Results were collated from all completed questionnaires (n = 70). Figure 1 shows the specialities of trainees who completed the questionnaire. The results show that trainees in all specialities believe that pathology remains a significant component of the membership exams, n = 61 (87%) (Figure 1). The majority, n = 47 (67%), of trainees felt that their undergraduate courses had not prepared them for their membership exams, and that they were disadvantaged in having to learn pathology from first principles rather than build on the basics they hoped to know already. Several doctors claimed not to have had a distinct taught pathology course in medical school. Although most doctors, n = 60 (86%), had kept their undergraduate notes, very few (n = 6, 9%) actually used these notes as a basis for preparation for their membership exams, as they did not cover specific topics in sufficient detail, and exam revision aids were perceived to be a more comprehensive source for exam preparation. We asked exactly what materials doctors preparing for their exams did use as an alternative or in addition to their undergraduate pathology notes. Most doctors, n = 64 (91%), relied heavily on published question-and-answer revision material (both in written form and online) rather than pathology texts (Figure 2). e-Path is a web-based educational resource to support medical and healthcare science trainees, delivered in partnership by the Royal College of Pathologists and e-Learning for Healthcare (e-LfH) [12]. Few junior doctors were aware of the existence of the pathology e-LfH website, n = 9 (13%), and only 2 trainees (3%) used this as a revision aid. Most doctors surveyed were unaware that this was readily accessible at work. Discussion That a sound knowledge of pathology is essential to clinical practice is in no doubt. The surgical specialties most certainly require this in every aspect of practice. This is reflected in the membership examination of the Royal College of Surgeons (MRCS), an entry requirement for formal surgical training, where pathology comprises one-third of the examination and is examined separately in the viva voce component, and in the membership examination of the Royal College of Obstetricians and Gynaecologists (MRCOG), where pathology comprises up to one-fifth of the clinical papers. Trainees in nonoperative specialties, however, whether interpreting results or discussing management decisions in multidisciplinary meetings, also need to learn and understand pathology, and are at a huge disadvantage in routine clinical work if they are unable to appreciate the significance of pathological findings and the natural history of disease.
Pathologists and clinicians have documented their experience of dealing with doctors who know much less pathology than junior doctors knew in the past [4,5,8-11], but it is rare to hear from the junior doctors themselves. The perception amongst the trainees we sampled was that pathology formed a significant part of their postgraduate learning. Just how helpful their undergraduate pathology learning is to their postgraduate training depends, of course, on the nature of the undergraduate course and on their own learning and retention of this material. Although we asked exactly where each respondent had trained, we did not correlate this with information about the exact nature of the course, and we accept that there might be considerable variation in the undergraduate teaching and learning of pathology across the UK. Nevertheless, the postgraduate syllabus for each specialty is standardized, and it is at this level that gaps in undergraduate knowledge require further study to meet national examination standards. One source of postgraduate learning is published revision material. Good exam revision resources, however, though widely available, are unlikely, on their own, to build a fundamental understanding of the pathological basis of disease. Effective undergraduate teaching and learning is key, and sets the foundation for postgraduate self-study using local and college facilities integrated with clinical medicine. Where there might be gaps in undergraduate learning, postgraduate learning material should be comprehensive and cover pathology from first principles. Later versions of Tomorrow's Doctors [2,3] have sought to rebalance the emphasis on the core curriculum and increase the amount and detail of pathology taught in medical school, but many pathology departments experienced crippling cuts in staffing and funding about 15 years ago, and meeting the increasing need for teaching may present a challenge. There is a dependence on question-and-answer revision aids. Over 90% of trainees use these, and while superficial learning or cramming just before exams is effective to some extent in terms of exam success, it is unlikely that this would form the basis of integrated clinical learning. Perhaps revision resources may be adapted for more long-term use throughout the junior doctor years to provide integrated learning alongside clinical experience? This would require collaboration between deaneries, the royal colleges, and major publishing houses. Take home message: our study indicates that learning pathology remains important in training in all specialties and in postgraduate exams. Doctors need to be able to rely on their undergraduate learning and to use quality resources that are easily available at work and that they can integrate with their clinical activity. Their dependence on revision resources raises the question of whether revision resources should be developed in collaboration with pathologists, the specialty colleges, and experienced teachers.
2019-03-10T13:14:16.582Z
2013-02-27T00:00:00.000
{ "year": 2013, "sha1": "9ac3b0c45ddbdfca488f168d21afe8632514cdea", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/archive/2013/165691.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9ac3b0c45ddbdfca488f168d21afe8632514cdea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
42144802
pes2o/s2orc
v3-fos-license
The Goos-Hänchen effect for surface plasmon polaritons. Abstract: By means of an impedance boundary condition and numerical solution of integral equations for the scattering amplitudes to which its use gives rise, we study as a function of its angle of incidence the reflection of a surface plasmon polariton beam propagating on a metal surface whose dielectric function is ε1(ω) when it is incident on a planar interface with a coplanar metal surface whose dielectric function is ε2(ω). When the surface of incidence is optically more dense than the surface of scattering, i.e. when |ε2(ω)| ≫ |ε1(ω)|, the reflected beam undergoes a lateral displacement whose magnitude is several times the wavelength of the incident beam. This displacement is the surface plasmon polariton analogue of the Goos-Hänchen effect. Since this displacement is sensitive to the dielectric properties of the surface, this effect can be exploited to sense modifications of the dielectric environment of a metal surface, e.g. due to adsorption of atomic or molecular layers on it. When an electromagnetic beam of finite cross section is incident from an optically more dense medium on its planar interface with an optically less dense medium, and the polar angle of incidence is greater than the critical angle for total internal reflection, the reflected beam undergoes a lateral displacement along the interface, as if it had been reflected from a plane in the optically less dense medium parallel to the physical interface. This effect was first observed by Goos and Hänchen [1], who measured a displacement D = 1.495λ ± 0.261λ for a beam reflected from a silver-coated glass-air interface at an angle of incidence θ0 = 44.1°. This Goos-Hänchen effect was explained soon after by Artmann [2], who related it to the phase ϕ(θ0) of the amplitude of the reflected beam by D = −[1/(k cos θ0)] dϕ(θ0)/dθ0, (1) where k is the wavenumber of the incident wave. In recent years analogues of optical effects originally associated with volume electromagnetic waves have begun to be studied both theoretically and experimentally in the context of surface plasmon polaritons (SPP). These include, e.g. negative refraction [3,4], the Talbot effect [5,6], lasing [7], cloaking [8-11] and Young's double-slit experiment [12]. The interest in such effects is due to a desire to discover new properties of these surface electromagnetic waves and to the possibility of basing novel nanoscale devices on them.
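To see Artmann's relation, Eq. (1), in action numerically, the short Python sketch below evaluates the lateral shift from a sampled reflection phase by finite differences. The phase profile phi(theta) used here is a made-up smooth stand-in for a computed reflection phase (chosen only to fall from 2π to π over a narrow angular interval, as described later in the text), so the printed numbers are purely illustrative.

```python
import numpy as np

# Toy illustration of Artmann's formula: D = -(1/(k cos(theta0))) dphi/dtheta0.
# phi(theta) below is an invented smooth phase standing in for the computed
# reflection phase; k is set by the vacuum wavelength, just to fix a scale.
k = 2 * np.pi / 632.8e-9                       # wavenumber scale, 1/m
theta = np.deg2rad(np.linspace(76.0, 89.0, 500))
theta_c = np.deg2rad(75.4)
phi = 2 * np.pi - np.pi * (theta - theta_c) / (np.deg2rad(90.0) - theta_c)

dphi = np.gradient(phi, theta)                 # dphi/dtheta by central differences
D = -dphi / (k * np.cos(theta))                # lateral displacement, in metres

i = int(np.argmax(D))
print(f"largest shift ~ {D[i] * 1e9:.0f} nm at theta0 = {np.rad2deg(theta[i]):.1f} deg")
```

As expected from the 1/cos(theta0) factor, the toy shift grows without bound as theta0 approaches 90 degrees; for realistic beams of finite width this divergence is cut off, as discussed below.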
With these motivations, in this paper we study the analogue of the Goos-Hänchen effect for SPP by investigating the system sketched in Fig. 1, in which a SPP propagating on the surface of a metal whose dielectric function is ε1(ω) is incident on a planar interface with an optically less dense metal whose dielectric function is ε2(ω) (|ε2(ω)| ≫ |ε1(ω)|). We consider the two cases in which the second metal is either infinitely long (single interface) or of finite length L (double interface). The electromagnetic field of the SPP is determined by the use of an impedance boundary condition, with the surface impedance function taking different forms for the single and the double interface, respectively. Due to the translational invariance of the system in the x2-direction, the scattering amplitudes A_{p,s}(q∥) take a general form, Eq. (6), involving one-dimensional amplitudes a_{p,s}(q1). Substituting Eq. (6) into Eq. (2) leads to a pair of effective one-dimensional integral equations (7), in which we define p̃ = (p1, k2) and q̃ = (q1, k2). Equations (7) are solved numerically using the Nyström method [14]. The infinite range of integration is replaced by a finite interval (−q∞, q∞). The resulting integrals over q1 were converted to sums using an N-point extended midpoint method. p1 was given the values of the abscissas used in the evaluation of the integrals, and a square 2N × 2N supermatrix equation with N = 18001 for a_{p,s}(p1) was solved by a standard linear equation solver. The convergence of the solution was monitored by increasing q∞ and N systematically until the solution did not change upon further increases of these parameters. A lateral displacement of the incident SPP beam is identifiable in the far-field region by the intensity distribution of the scattered electromagnetic field of the propagating p-polarized SPP mode with wavenumber k∥(ω). For an incident plane wave, the scattered field is given by an integral over plane waves with amplitudes A_p(q∥), where ê_p(q∥) = (c/ω)[i q̂∥ β0(q∥) − x̂3 q∥] is the polarization vector for a p-polarized SPP. The contribution to this field in the region x1 < 0 from the reflected surface plasmon polariton is given by the residue of the integrand at the simple pole it has at q1 = −k1(ω) = −cos(θ) k∥(ω). With the assumption that ε1(ω) has an infinitesimal positive imaginary part, this pole lies in the lower half of the complex q1 plane. It can be shown that a_p(q1) has no pole in this region. On evaluating the residue at this pole we obtain the electric field of the reflected SPP in the region x1 < 0, where r(−k1) is the reflection amplitude. In Fig. 2 we present a plot of R as a function of the angle of incidence when a SPP in the form of a plane wave whose wavelength is λ = 632.8 nm, propagating on a gold surface with ε1(ω) = −11.8 at the corresponding frequency, is incident on its planar interface with aluminum (ε2(ω) = −64.07). Since the mean free paths of SPP on these two surfaces are L1 = 7 μm and L2 = 30 μm, we expect the effect of ohmic losses on our results obtained for real-valued εi(ω) to be small. The angle of incidence is θ = 78°, and the 1/e half width of the beam is w = 20 c/ω. The critical angle for total internal reflection, given by sin θc = k2(ω)/k1(ω), where k1 and k2 are the SPP wavenumbers on the two surfaces, has the value θc = 75.4° in this case. Note that preliminary, non-converged results were shown unanalyzed as work in progress in Ref. [15]. It is seen from this figure that R is small (∼10⁻⁴) for all angles smaller than θc, and equal to unity for angles greater than θc. R is not a monotonically increasing function of θ, but has a pronounced minimum at the angle of incidence θ ≈ 45°.
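As a consistency check on the quoted numbers, the critical angle can be reproduced from the ratio of SPP wavenumbers. The dispersion relation k(ω) = (ω/c)√(1 + 1/|ε|) used in the sketch below is our assumption for an impedance-boundary-condition description of the SPP, not a formula quoted from the paper; with the stated dielectric constants it does return θc ≈ 75.4°, in agreement with the text.

```python
import numpy as np

# Critical angle for total internal reflection of an SPP crossing from gold
# to aluminum.  Assumed impedance-boundary-condition dispersion:
#   k = (w/c) * sqrt(1 + 1/|eps|);
# only the ratio k2/k1 matters, so the prefactor w/c cancels in sin(theta_c).
eps1, eps2 = -11.8, -64.07          # gold, aluminum at 632.8 nm (from the text)

def k_over_w_c(eps):
    """SPP wavenumber in units of w/c under the assumed dispersion."""
    return np.sqrt(1.0 + 1.0 / abs(eps))

theta_c = np.degrees(np.arcsin(k_over_w_c(eps2) / k_over_w_c(eps1)))
print(f"theta_c = {theta_c:.1f} deg")  # prints 75.4 for these constants
```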
The occurrence of this dip has been explained, for a somewhat different SPP scattering problem, as the Brewster effect for the incident SPP [16]. The shift of the position of the minimum in Fig. 2 from θ = 45° is due to a small imaginary part added to ε1(ω). The phase shift ϕ(θ) at the interface is close to π for angles smaller than θ = 45°, and jumps to nearly 2π at this angle. In the rather narrow interval [θc, 90°] the phase decreases continuously from 2π to π. Because of the derivative of ϕ(θ) in Artmann's result, one can therefore expect a significant lateral displacement of the reflected beam. To observe the Goos-Hänchen effect we need the intensity distribution of the field of the reflected SPP when the incident SPP has the form of a beam instead of a plane wave. Such a field is represented by a superposition of plane waves weighted by a Gaussian function of θ with 1/e half width 2/[k∥(ω)w], centered at θ = θ0, and normalized to unity, which yields a Gaussian beam of 1/e half width w whose angle of incidence is θ0. In Fig. 3 we present a color-level plot of the intensity distribution of the incident and reflected SPP beams for the system assumed in obtaining the results plotted in Fig. 2. The positions of the maxima of both beams at the interface (x1 = 0) are marked with a dashed line, showing a displacement of D = 60.3 c/ω = 9.6λ. In Fig. 4 we compare the results for D as a function of the angle of incidence of the beam θ0 for different widths of the beam. For a broad beam (w = 200 c/ω, solid line) we note the existence of a pronounced negative displacement for an angle of incidence close to 45°, arising from the jump in the phase shift from π to 2π as in Fig. 2. However, since the reflectivity R is very small (about 10⁻⁵) at this angle, it might be difficult to observe this negative displacement experimentally. At θc, D increases substantially, and the pronounced peak can be related to the maximum slope of ϕ(θ) in Fig. 2. The behavior for θ0 → 90° is in agreement with the 1/cos(θ0) dependence in Artmann's formula. The faint negative D close to θc is a result of the use of a small value for η = 0.01 in f_I(Q1). In the case of beams with smaller half widths, Artmann's formula no longer holds, resulting in noticeable differences in the dependence of D on the angle of incidence: the smaller the width, the fewer the structures in D(θ0), e.g. as in Fig. 4 for beams with w = 30 c/ω (dashed line) or w = 10 c/ω (dotted line). For instance, the negative displacement at θ0 = 45° becomes less pronounced and the feature in the curve smears out. Similar observations can be made for the structures close to the critical angle for total internal reflection. In particular, D remains finite when θ0 → 90°. At a double interface with L = 20 c/ω, the phase of a reflected SPP plane wave (see inset of Fig. 5) oscillates as a function of the angle of incidence, which is due to multiple scattering at the two interfaces leading to destructive or constructive interference at different angles, as in a Fabry-Pérot interferometer. These oscillations also appear in D(θ0) shown in Fig. 5 and decay with larger L. The absolute values of the lateral displacements of SPP beams, of up to 25λ, are about one order of magnitude larger than the corresponding results for volume waves [1]. This can be rationalized by the larger critical angle and the concomitantly steeper decrease of the phase in the interval [θc, 90°] in the case of SPP. The values are also sensitive to changes in the dielectric functions.
Figure 6 shows the change in D upon variation of either ε1(ω) or ε2(ω) while the other one is fixed. These results were obtained by the use of Artmann's formula (1). In particular, the strong dependence of the calculated lateral displacement on the value of ε1(ω), i.e. the dielectric function of the gold surface, indicates that small modifications of the latter may be resolved. Adsorption of molecules changes the dielectric environment of surfaces. Experimental measurements of the Goos-Hänchen effect for SPP for θ0 > θc, i.e. at grazing incidence, as a function of molecular coverage may prove useful in sensing this change, thereby allowing conclusions to be drawn on adsorption or desorption processes and complementing techniques such as surface plasmon resonance spectroscopy. In summary, we have demonstrated in this paper the existence of the analogue of the Goos-Hänchen effect for SPPs at a gold-aluminum interface. Due to the large critical angle of θc ≈ 75° for total internal reflection of the SPP, lateral displacements of several times the wavelength of the incident beam occur. The sensitivity of the displacement to changes of the surface optical properties may be exploited to measure, for instance, the modification of the dielectric environment of the metal upon molecular adsorption.
2018-04-03T05:13:12.381Z
2011-08-01T00:00:00.000
{ "year": 2011, "sha1": "3ccf2e078ba7c7329749f8db016a611687c4b894", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.19.015483", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c7b60882f2eaaa8ff8c523b85b0e0e1df4fb7001", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine", "Materials Science" ] }
11824324
pes2o/s2orc
v3-fos-license
Pigment cell movement is not required for generation of Turing patterns in zebrafish skin. The zebrafish is a model organism for pattern formation in vertebrates. Understanding what drives the formation of its coloured skin motifs could prove pivotal to comprehending the mechanisms behind morphogenesis. The motifs look and behave like reaction-diffusion Turing patterns, but the nature of the underlying physico-chemical processes is very different, and the origin of the patterns is still unclear. Here we propose a minimal model for such pattern formation based on a regulatory mechanism deduced from experimental observations. This model is able to produce patterns with an intrinsic wavelength, closely resembling the experimental ones. We mathematically prove that their origin is a Turing bifurcation occurring despite the absence of cell motion, through an effect that we call differential growth. This mechanism is qualitatively different from the reaction-diffusion one originally proposed by Turing, although they both generate the short-range activation and the long-range inhibition required to form Turing patterns. In recent years, developmental biology experienced a renewed interest in one of its most fascinating and long-debated phenomena: morphogenesis [1]. The emblematic question of morphogenesis is how the egg cell, which is essentially a round supramolecular aggregate, can spontaneously break its original symmetry to produce the very complex shapes of the adult body, in such a diverse yet precise and reproducible way. A hypothesis for such a mechanism was proposed by Alan Turing in 1952 in his seminal paper [2]. Turing showed with a theoretical model that a chemical reaction coupled with Fickian diffusion can give rise to a spontaneous symmetry-breaking phenomenon, in which an initial state having a uniform distribution of chemicals is converted into a regular pattern of concentrations. He formulated his ideas in the framework of the reaction-diffusion (RD) formalism, with a mechanism that can be shortly explained as follows. A chemical species X locally promotes its own production because it is part of an autocatalytic reaction. This chemical feedback results in a short-range activation of X. The reaction also creates a species Y that promotes the consumption of X. If Y diffuses more rapidly than X, this second chemical feedback leads to a long-range inhibition. The combination of these two effects induces a non-trivial spatial distribution of the concentrations, which generates the patterns. Since this pioneering work, the literature has referred to such a mechanism and to the corresponding stationary patterns, respectively, as Turing instability and Turing patterns. Turing's hypothesis is fascinating because it reduces the enormous complexity of the original biological problem to a relatively 'simple' chemical explanation. Thanks to this simplicity, chemical Turing patterns have been extensively investigated for decades, both theoretically [3] and experimentally [4,5]. The basic thermodynamical conditions for their emergence have been clarified by Prigogine et al. [6,7]. It is now well known that Turing patterns are a typical example of dissipative structures. These structures are a class of spatiotemporal organizations that can be obtained only far from equilibrium, where a system can continuously dissipate entropy in its environment, which compensates the entropy produced by the internal irreversible processes and keeps the system in an organized state.
From a mathematical perspective, a Turing mechanism is a mechanism that generates a Turing bifurcation (the 'Turing bifurcation' is more often called 'Turing instability' in the literature; to avoid any confusion, we use the term 'bifurcation' when referring to the mathematical condition, and 'instability' in a more general fashion). A Turing bifurcation can occur in a system of partial differential equations (PDEs) ruling the spatiotemporal evolution of a set of dynamical variables c(x, t), ∂c/∂t = f(c; λ) + ∇·[D(x; λ)∇c]. (1) In these equations, f stands for the local evolution laws, which are usually nonlinear and depend on a set of control parameters λ. The second term accounts for the diffusion of the different variables based on the matrix of diffusion coefficients D(x; λ). The bifurcation takes place when one of the eigenvalues ω(k) of the corresponding linear problem is real and crosses zero for a unique and nonzero wavenumber k = k_T. This condition, which is controlled by the value of the parameters, implies that the dominant unstable mode of the bifurcating solution has an intrinsic wavelength 2π/k_T. To understand the detailed mechanism underlying pattern formation in vertebrates, the zebrafish Danio rerio was proposed more than 10 years ago as a model organism for experimental investigations [8]. The reasons for this choice are, on the one side, the ease and rapidity of breeding and genetically studying large colonies of these small animals, and on the other side, the large variety of coloured skin patterns that can be observed in different mutations within the Danio genus. These patterns range from stripes to spots of different sizes, and strikingly resemble the family of Turing patterns produced by classical RD models [9]. This analogy further extends to the dynamical response that both zebrafish skin patterns and theoretical Turing patterns have with respect to perturbations [10]. These findings might suggest that a RD mechanism underlies the formation of the zebrafish's skin patterns, but there are several points against this hypothesis. The first is that the pattern is not formed by a continuous concentration field of pigment molecules diffusing across the fish skin, but by a discrete assembly of coloured cells (melanophores, black; xantophores, yellow; and iridophores, light blue) whose motion is unlikely to follow a Fickian dynamics. The second is that RD systems are not the only ones that can create patterns with an intrinsic wavelength in biology, as evidenced by Höfer and Maini [11] for the angelfish [12]. Moreover, the size of the pattern is very small, with a characteristic wavelength of only 10-20 cells (around 1-2 mm). Such numbers endanger the straightforward application of spatially continuous models. So, clearly we are not in the presence of a classical RD system, yet the analogies with RD models cannot be disregarded as mere coincidences. To gain more insight into the underlying mechanisms, Kondo et al. performed controlled experiments to identify the mutual interactions between xantophores and melanophores. These studies unveiled a regulatory network in which the two types of cells inhibit each other at short range, while xantophores activate the emergence of melanophores at long distances [13]. Remarkably, the proposed mechanism does not seem to require diffusion nor any kind of cell motion. This hypothesis has been recently consolidated by experimental investigations by Nüsslein-Volhard et al.
These authors showed that the Kondo mechanism is a subset of a more complex regulatory scheme involving iridophores [14]. More importantly, the authors also stress that skin patterns emerge despite the absence of extensive cell movement [15]. All these studies suggest that patterns can emerge in the case of immobile units. To test whether this hypothesis is correct, we derive and analyse in this work a mathematical model for immobile cells based on the observed regulatory network. We show that this model can undergo a Turing bifurcation and reproduce the experimental patterns. We provide in this way a proof of principle for the idea that Turing patterns can appear in systems made of immobile agents. Results Rationale for the choice of model. As mentioned above, the investigations by Kondo et al. led to the conclusion that melanophores and xantophores regulate each other's growth. Nüsslein-Volhard et al. arrived at the conclusion that iridophores also play a role in the emergence of the skin patterns. They observed that patterns can be seen in fish mutants having at least two of the three types of cells. They thus proposed that, on top of the mechanism put forth by Kondo, interactions also exist between melanophores and iridophores, which are qualitatively the same as the ones between melanophores and xantophores. Since xantophores and iridophores also regulate each other's growth, the resulting system actually consists of two coupled Kondo-like mechanisms. Our objective is to assess with a theoretical model whether immobile cells presenting short-range activation and long-range inhibition can lead, generally speaking, to patterns that are similar to those observed in the experiments. We also want to understand the mechanism by which such patterns would emerge. Despite the tremendous amount of experimental work, such a model does not exist yet. Even the complicated multiscale model proposed by Nakamasu et al. in ref. [13] implicitly relies on diffusion. Consequently, we devise a scheme that is solely based on the interactions between xantophores and melanophores. This choice is also justified by recent studies [16] showing that iridophores are indeed necessary to develop patterns in the trunk of the fish, but that melanophores and xantophores alone are enough to produce the fin patterning. Since the pattern in the trunk is contiguous with the one in the fins, and since the two share the same geometrical characteristics, it is reasonable to think that they should also share the same core mechanism. In other words, melanophores and xantophores are central to pattern formation, with iridophores playing mainly an assistance role in the trunk. A classical route to modelling morphogen gradients is to identify experimentally what kind of transport mechanism the morphogens are subject to, and to write a set of PDEs for their concentration field variables accordingly [17]. In the case of diffusive motion, one obtains a set of equations of the form (1). This implies the presence of morphogens undergoing a simple Brownian motion [18], which is not the case here. To cope with more complex situations, different approaches exist in the literature, involving, for example, integro-differential diffusion [19,20] or statistical mechanics modelling [21].
In the present case, the patterns are formed by a discrete assembly of coloured cells interacting with each other, and we have experimental information on the network of such cell-to-cell interactions: it seems therefore a natural choice to describe the fish skin at its cellular level, as a discrete system. This will allow us to translate the cellular interactions into a set of probabilistic processes, from which average evolution equations can be derived. This approach has been successfully applied at the level of chemical reactions taking place on low-dimensional supports [22]. Its generality makes it suitable as well for the kind of cellular model we will use. Model derivation. To simplify the mathematical setting, we describe the fish hypodermis as a regular lattice, whose nodes can either be occupied by a xantophore, by a melanophore or by another type of cell. We thus exclude the possibility of overlap between the two types of chromatophores. This approximation is supported by the experimental evidence that melanophores and xantophores rest on separate layers of iridophores [23]. The distance a between two nodes is chosen to be the average diameter of a chromatophore. The node at position i in the lattice is described by three boolean variables, X_i for the xantophores, M_i for the melanophores and S_i for a node without chromatophore, which take the value 1 if the node is occupied by the corresponding species, and 0 otherwise. Consequently, one has X_i + M_i + S_i = 1 everywhere and at all times. The chromatophores can undergo different types of transformations and interactions. We choose to represent these different events as stochastic processes, each having an intrinsic probability per unit time. The change in the cellular nature of each node is thus accounted for by a modification of the local variables, induced by each of the processes. We will now identify these events, and see how they translate in the framework of the probabilistic description that we propose. The precursors of the chromatophores develop in the neural crest of the fish, and then migrate towards the skin through essentially two pathways: a ventromedial pathway between the neural tube and somites, and a dorsolateral pathway between the somites and the epidermis [8,24]. It is also known that the two pathways are not equally selected by neural crest cells, especially in the early stage of embryonic development, and that this may cause the establishment of a pre-pattern on the fish skin, which helps to determine the fine details and the orientation of the final pattern. Singh et al. [25] showed that the development of the pre-pattern and the final pattern in the growing fish is strongly influenced by the presence of iridophores alongside melanophores and xantophores. However, we will not include these details in our description. Our objective is to develop a two-species model that can assess whether the regulation scheme that is at the heart of the qualitative mechanism proposed by Kondo is able to generate patterns with an intrinsic wavelength and geometry. We will thus simply consider that, thanks to the above two pathways, chromatophores can randomly appear at any position in the hypodermis that is not already occupied by a chromatophore. We call this process the 'birth' of either a xantophore or a melanophore. In symbolic notation, the birth processes read S_i → X_i with rate b_X (2) and S_i → M_i with rate b_M (3), where b_X and b_M are the rate constants characterizing the probability for a xantophore or a melanophore to appear, per unit time, at a node i.
We denote the nature of the cell with straight letters corresponding to the boolean occupation variables introduced earlier. Chromatophores can also naturally die because of ageing processes. When a chromatophore dies it is rapidly destroyed and removed from the hypodermis, leaving room for a new chromatophore. We write these two natural death processes as X_i → S_i with rate d_X (4) and M_i → S_i with rate d_M (5), where d_X and d_M are the corresponding rate constants. Experiments performed in vivo in the early stage of pattern development show that xantophores and melanophores mutually inhibit each other's growth when they are close [13]. More precisely, the experiments show both an increase in the proliferation of one type of cell when the first neighbouring cells of the other type are ablated, and an increased death rate of one type of cell when it is surrounded by cells of the other type. These observations suggest that such a short-range interaction is mediated either by direct contact between the two cells [26], or by local competition for nutrients. In either case, it is safe to assume that the range of this mutual inhibition extends only to first neighbours. We will consequently cast the overall effect of this competition into an increased death probability of a xantophore (melanophore) when a melanophore (xantophore) is close by: X_i + M_j → S_i + M_j with rate s_M (6) and M_i + X_j → S_i + X_j with rate s_X (7), where j is a first neighbour of i. The rate constants s_M and s_X are assumed to be larger than their natural counterparts d_X and d_M. Chromatophores can influence each other's growth also when they are separated by a few cells [13]. We will refer to this kind of feedback as long-range interactions. These experiments have been performed on striped skin patterns, by selectively ablating cells belonging to a certain stripe while monitoring the birth or death rates of a group of cells in a neighbouring stripe. Therefore, we infer that such long-range interactions extend over a characteristic distance, which we call h. It would appear from the experiments that h is typically of the order of 10 cells, which corresponds to one half of the wavelength of the stripes. We would like to stress that at this stage, we do not relate h a priori to this wavelength. This parameter simply acknowledges the fact that there must be a long-range interaction between a cell at position i and another one at position i ± h. If the model is representative of the experiments, we should find a posteriori that the wavelength of the pattern is approximately equal to 2h. Three types of long-range interactions have been experimentally observed: (a) the birth of new melanophores is promoted by the presence of xantophores at a distance h; (b) the survival of already existing melanophores is enhanced by the presence of xantophores at a distance h; (c) the birth of new melanophores is inhibited by the presence of already existing melanophores at a distance h. Our investigations of different theoretical models revealed that process (a) is mandatory to obtain patterns with a wavelength that is controlled by h. Processes (b) and (c) bring small modifications to the dynamics but do not affect the patterns qualitatively. Therefore, in accordance with our aim of providing a minimal model, we will take into account only the increased birth rate of melanophores due to the presence of xantophores at long distance: S_i + X_{i±h} → M_i + X_{i±h} with rate λ_X. (8) Although the study by Nakamasu et al. [13] shows that a feedback of type (a) must exist at a distance h, it does not say anything about its physico-chemical nature. A recent study by Hamada et al.
strongly suggests that feedback (b) is mediated by a Delta-Notch signalling occurring at the tip of a long-range projection extended by melanophores, over a distance of approximately one stripe [27]. A similar interaction might occur between already emerged xantophores and melanophores that are below the hypodermis, which would then cause feedback (a), but there is no experimental study so far that confirms or rules out this hypothesis. Such details are, however, not needed in the simple, phenomenological description that we put forward for the long-range interactions. Finally, we want to discuss in more detail the validity of our hypothesis on the absence of cell motion. Experiments show that melanophores perform some motion across the fish skin [28]. More recently, a complex motion of melanophores and xantophores was observed in vitro and was seen to be triggered by their mutual interaction [29]. In both cases, melanophores tend to get away from the xantophores. Nakamasu et al. [13] also report that some melanophores migrate out of the monitored region both in the test and in the control experiments. Even though we are aware of a certain degree of mobility of the melanophores, we will consider cells to remain immobile in the present study. This hypothesis can be justified as follows. First, there is no systematic study (to the best of our knowledge) that proves that xantophores are capable of appreciable motion in vivo. In fact, the most recent experimental studies point towards a total absence of large-scale cell movement [15]. Second, there is no evidence that this motion is necessary for pattern formation: the fact that there is a movement of cells does not mean that this motion is responsible for the pattern formation itself. Moreover, the Kondo mechanism implies that pattern formation does not require any sort of transport. As our main aim here is to test the validity of this hypothesis, we will neglect any type of cell motion whatsoever. The approach we use could, however, easily be extended to include such effects. Evolution equations. The above processes form a set of events, each taking place with a given probability. To investigate the dynamics generated by these processes, one can either perform stochastic kinetic Monte Carlo (MC) simulations or analyse the evolution equations for the averages of the boolean variables X_i, M_i and S_i over an ensemble of realizations. These equations are obtained from the master equation ruling the evolution of the underlying probability distribution. Consider as an illustration the case of a one-dimensional system. Including all the above events, one obtains a pair of coupled evolution equations for ⟨X_i⟩ and ⟨M_i⟩, where the brackets stand for ensemble averages. The above equations formally apply to one-dimensional lattices, but can easily be extended to two-dimensional cases. These kinetic laws represent the evolution of the probability for each site i to be either occupied by a xantophore or by a melanophore. Note that we do not need an explicit evolution law for ⟨S_i⟩, because of the conservation rule mentioned earlier. The terms of the form ⟨A_i B_j⟩ represent the joint probability of having a particle A at position i and a particle B at position j. These evolution equations can be transformed into relatively simple PDEs in an appropriate limit. First, we use the mean-field hypothesis, which assumes that the average composition of the ith node does not depend on the composition of the jth node, so that ⟨A_i B_j⟩ ≈ ⟨A_i⟩⟨B_j⟩.
Second, we switch to continuous spatial coordinates r = ia, in which a is the typical size of a cell. We will denote the corresponding scalar fields of average occupation numbers x(r) = ⟨X_i⟩ and m(r) = ⟨M_i⟩, respectively. Finally, we assume that the variations of these newly introduced variables are smooth enough in space that the occupation probabilities at neighbouring sites can be Taylor expanded around r. Keeping terms up to the second order in a, which acts as an intrinsically small parameter, we obtain a set of PDEs, equations (14) and (15), for ∂x/∂t and ∂m/∂t, in which we do not explicitly write the spatiotemporal dependences for simplicity. The short-range and long-range interactions translate, in this continuous limit, into cross-diffusion-like terms. These contributions should, however, not be interpreted as being due to cross-diffusion, as their physical origin is completely different: they do not originate from molecular or cell motion, but are instead due to the nonlocal character of the biological interactions between the cells. We now present the results of stochastic MC simulations of the model, and show how they can be interpreted analytically in terms of the above continuous limit. Stochastic simulations. We want to assess whether the proposed model is able to generate stationary structures similar to those observed in the experiments. To this end, we perform simulations of the discrete scheme (2)-(8) on two-dimensional square lattices with relevant values of the parameters. The experiments show that the short-range inhibition causes cells to die much faster than they would do otherwise. Similarly, the birth of new melanophores is much more rapid when it is promoted by long-range activation by xantophores. For these reasons, we set d_X, d_M and b_M to zero to obtain a qualitative representation of the behaviour of the system during timescales that are consistent with those of the experiments. In qualitative accordance with the observations, we also assume that melanophores and xantophores inhibit each other with essentially the same strength, so that s_X = s_M = s. Simulations with such sets of parameters show that the model is indeed able to qualitatively reproduce the experimental patterns (see, for example, Fig. 1). These patterns are stationary and come in different geometries and wavelengths, which are controlled by the different constants of the model. In particular, if the values of all the other control parameters are fixed, λ_X controls the geometry of the pattern (see Fig. 2). For low values of λ_X, the skin would be covered only by xantophores, and would be entirely yellow. Increasing this parameter first turns this homogeneous state into a black-dotted yellow skin, then into a striped pattern and, eventually, into a predominantly black skin with yellow spots. We notice that λ_X barely affects the wavelength of the patterns. It is actually h that controls the intrinsic size of the pattern. Remarkably, the wavelength of the pattern turns out to be roughly equal to 2h, as reported in the experimental investigations. The above results reveal a surprisingly simple selection mechanism for the patterns, which are essentially controlled by the parameters characterizing the long-range interaction: its spatial extent h and its 'strength' λ_X. This behaviour can be understood as follows. The mutual short-range competition processes (6)-(7) act as a source of segregation, since they tend to destroy close-by pairs of different chromatophores.
Should the birth of chromatophores be controlled only by the random events (2) and (3), one would observe a separation into X and M domains of indefinite size. The only reason why a finite wavelength emerges in our simulations is that the birth of new melanophores is controlled by the long-range interaction. The parameter h sets the boundary of a 'forbidden' region surrounding a xantophore, in which melanophores cannot grow. If λ_X is small, the rate of creation of melanophores is very low and xantophores win the competition everywhere. As its value increases, melanophores start to fill more and more of the space outside the forbidden regions, which first leads to the formation of black dots, then black stripes and eventually to the essentially black skin with yellow dots. It is worth noting that the model generates stationary patterns with a wavelength of only a few cells. In many instances, the stochasticity of the different processes in RD systems leads to levels of noise such that pattern formation is compromised at small scales. In this respect, the nonlocal interactions between immobile agents seem to give rise to structures that are especially robust. This is in agreement with the experiments, which show that patterns can be generated in zebrafish whose half wavelength is of the order of 10 cells. In addition, we also assessed qualitatively the effect of having a pre-pattern on the development of the striped pattern. It is known that the initial stage of the pattern in the trunk of the wild-type zebrafish consists of a single band of iridophores, which inhibits the growth of melanophores on top of it [15]. We simulated this by defining a region (the shaded horizontal bands in Fig. 1) where melanophores cannot grow, or in other words where processes (3) and (8) are completely inhibited. We observed that the presence of such a pre-pattern leads to a selection of the final orientation of the stripes, which often tend to align with the band of iridophores. However, the pre-pattern does not seem to affect the intrinsic wavelength of the pattern itself. More particularly, the final orientation of the stripes in the absence of initial iridophores (Fig. 1a) is such that the system accommodates optimally an integer number of stripes with a wavelength fixed by the choice of parameter values. The presence of a horizontal band (Fig. 1b,c) where the growth of melanophores is inhibited acts as a preferential region for the growth of a xantophore stripe, which in turn directs the evolution of the pattern horizontally. This effect is independent of the size of the pre-pattern: even a narrow band of iridophores can trigger the above-mentioned orientation process. Analytical results. Simulations reveal the existence of stationary patterns with an intrinsic wavelength, but are these structures the consequence of a Turing bifurcation? To answer that fundamental question, one can perform a linear stability analysis of equations (14) and (15). This analysis shows that the system can have up to three homogeneous steady states, one of which can undergo a Turing bifurcation. The explicit forms of these solutions are, however, complicated functions of the control parameters.
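To make the linear-stability step concrete, the short Python sketch below shows how such an analysis detects a Turing instability numerically for a generic two-variable system of the form (1). The Jacobian and diffusion matrix are illustrative numbers chosen to satisfy the classical Turing conditions; nothing here is taken from equations (14) and (15) of the zebrafish model itself.

```python
import numpy as np

# Generic sketch: linear stability of a uniform steady state of
# dc/dt = f(c) + D * Laplacian(c).  J is the Jacobian of f at the steady
# state; J and D below are illustrative values, not the zebrafish model's.
J = np.array([[0.5, -1.0],
              [1.0, -1.2]])   # stable without diffusion: trace < 0, det > 0
D = np.diag([1.0, 20.0])      # the inhibiting variable diffuses much faster

ks = np.linspace(0.01, 2.0, 2000)
growth = [np.linalg.eigvals(J - k**2 * D).real.max() for k in ks]

k_T = ks[int(np.argmax(growth))]
print(f"fastest-growing wavenumber k_T = {k_T:.3f}, "
      f"max Re omega = {max(growth):.3f}, wavelength = {2 * np.pi / k_T:.2f}")
# Past a Turing bifurcation, Re omega(k) > 0 on a finite band of nonzero k
# while the homogeneous k = 0 mode stays stable, selecting the intrinsic
# wavelength 2*pi/k_T of the emerging pattern.
```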
To highlight the origin of the bifurcation, we will rather focus on the simple, yet representative, limit of the model with which we performed the simulations, d_X = d_M = b_M = 0 and s_X = s_M = s. Under these approximations, the system still has three homogeneous steady states. The first and the third states represent a complete saturation of the skin by xantophores or by melanophores, respectively. They appear only because we neglected the natural death of both species. On top of being unrealistic for long times, it can be proven that they cannot undergo a Turing bifurcation. The second state corresponds to a skin containing both types of chromatophores. It has acceptable values only if λ_X ≥ b_X, in other words only if the xantophore-induced birth of melanophores is as efficient as (or more efficient than) the natural birth of xantophores. This latter state can undergo a Turing bifurcation, which can be expressed in terms of a critical long-range interaction distance h_T. The corresponding curve is depicted in Fig. 2. Patterns will appear for interaction distances that are larger than or equal to this value. Moreover, at the bifurcation point, the most unstable mode of wavenumber k_T corresponds to a well-defined wavelength λ = 2π/k_T (see also Fig. 3). These expressions fit well with the outcomes of the simulations. In particular, the analysis predicts that at criticality, λ ≈ 2h_T for a wide range of parameters. We can thus conclude that the patterns observed in the simulations of the model are indeed the consequence of a Turing bifurcation, despite the absence of motion of any kind. This is made possible by the presence of the effective cross-diffusion-like terms that we mentioned earlier, which are the mathematical translation of the existence of nonlocal interactions. Discussion On the basis of the above results, we can answer some of the questions that we outlined at the beginning. We showed that the combination of short-range and long-range feedbacks in the Kondo mechanism can give rise to dynamical models undergoing a Turing bifurcation in the absence of cellular transport. The bifurcation occurs because the combination of the different feedbacks, here between melanophores and xantophores, generates a dynamics of the 'short-range activation and long-range inhibition' type. As a consequence, patterns with an intrinsic wavelength arise that can rightfully be called Turing patterns. The model we used to reach this conclusion is the simplest possible implementation of the mechanism put forth to explain the emergence of patterns on the skin of zebrafish. It is unlikely that this minimal model will reproduce all the experimental observations, but we can reasonably expect that including additional ingredients, such as the role of iridophores or limited cell motion, will allow for a finer modelling of the experiments, without threatening the above general conclusion. The mechanism by which the patterns emerge in the zebrafish is hence different from the proposal by Alan Turing. The proposed feedbacks consist of a combination of nonlocal biological interactions that affect the growth rates of the cells differently, depending on their surroundings. We propose to call this mechanism differential growth. The key element for the emergence of the pattern is that, although the single cells are immobile, differential growth induces an effective redistribution of cell populations in space.
It would be of interest to assess whether similar feedbacks are at the heart of pattern formation in other instances of morphogenesis, so as to clarify the degree of universality of this mechanism. Moreover, efforts should be made to elucidate the exact physico-chemical nature of the cell-to-cell coupling to understand how the skin patterns enter the general framework of dissipative structures. Methods Simulation algorithm. To simulate the stochastic processes standing for the different steps of the proposed regulatory scheme, we used kinetic MC simulations based on the following algorithm: 1. The substrate is modelled as a square lattice with periodic boundary conditions, composed of N_0 nodes with a coordination number of 4 (that is, there are four first neighbours). Each of these nodes can be occupied by a xantophore (state X), by a melanophore (state M) or by none of these chromatophores (state S). An initial configuration of the lattice is chosen. In the simulations presented in this work, the lattice was initially uniformly covered in S. 2. A probability P_k is associated with each of the elementary steps of the scheme. It is calculated as the rate constant of the process divided by the sum of all the rate constants. 3. One site of the lattice, say position i, is chosen at random and one of the processes is selected (with the appropriate probability). If the spatial configuration of the cells is in accordance with the chosen process, the composition of the lattice is changed accordingly. To be more precise: for the birth of a xantophore (melanophore), the site i must initially be in the state S and, if so, turns into the state X (M). If a short-range interaction is selected, the state of one randomly chosen first neighbour of i is inspected as well. For the melanophore-induced death of a xantophore, the site i must be in the state X, while its neighbour must be an M, and the X becomes an S. In the case of xantophore-induced death of melanophores, the initially chosen location must be an M, and the neighbour an X. The M then turns into an S. Whenever the long-range interaction is chosen, one first makes sure that the selected site is in the S state. If so, another lattice site j is randomly chosen among all the sites that are at a distance h from the central site i. We implemented this step by randomly selecting an angle α between 0 and 2π; if {i_x, i_y} are the x and y coordinates of the location i, the coordinates of the location j are then calculated as {j_x, j_y} = {i_x + h cos α, i_y + h sin α}. If the location j happens to be in the state X, then the state of site i changes from S to M. 4. Whatever the process, and whatever the result of the corresponding attempt, time is advanced by 1/N_0 and the algorithm goes back to step 3. In some of the simulations presented in the main text, we simulated the presence of a band of iridophores, which inhibit the growth of melanophores on top of them. We implemented this feature by defining a region of space in the form of a horizontal line, where the birth of melanophores (either natural or induced by the long-range interaction with xantophores) is not possible. We keep this condition on for the whole duration of the simulation. Interactive JavaScript simulator: This paper is accompanied by Supplementary
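For concreteness, the following condensed Python sketch implements the lattice algorithm described in the Methods above (steps 1-4). The parameter values are illustrative and are not the ones used to produce the paper's figures; the process ordering follows the scheme (2)-(8).

```python
import numpy as np

# Condensed sketch of the kinetic MC algorithm described above.
# States: 0 = S (empty), 1 = X (xantophore), 2 = M (melanophore).
# Rate constants are illustrative choices, not fitted values.
rng = np.random.default_rng(0)
N, h = 100, 10                       # lattice side, long-range distance
b_X, b_M, d_X, d_M = 1.0, 0.0, 0.0, 0.0
s, lam_X = 1.0, 2.0                  # s_X = s_M = s; long-range birth rate
rates = np.array([b_X, b_M, d_X, d_M, s, s, lam_X])  # processes (2)-(8)
P = rates / rates.sum()              # probability of picking each process

grid = np.zeros((N, N), dtype=np.int8)
neigh = [(1, 0), (-1, 0), (0, 1), (0, -1)]

for step in range(2_000_000):        # each attempt advances time by 1/N0
    i, j = rng.integers(N, size=2)
    proc = rng.choice(7, p=P)
    if proc == 0 and grid[i, j] == 0:        # (2) birth of X on an empty site
        grid[i, j] = 1
    elif proc == 1 and grid[i, j] == 0:      # (3) natural birth of M
        grid[i, j] = 2
    elif proc == 2 and grid[i, j] == 1:      # (4) natural death of X
        grid[i, j] = 0
    elif proc == 3 and grid[i, j] == 2:      # (5) natural death of M
        grid[i, j] = 0
    elif proc in (4, 5):                     # (6)-(7) short-range inhibition
        di, dj = neigh[rng.integers(4)]
        k, l = (i + di) % N, (j + dj) % N
        if proc == 4 and grid[i, j] == 1 and grid[k, l] == 2:
            grid[i, j] = 0                   # neighbouring M kills the X
        if proc == 5 and grid[i, j] == 2 and grid[k, l] == 1:
            grid[i, j] = 0                   # neighbouring X kills the M
    elif proc == 6 and grid[i, j] == 0:      # (8) long-range X-induced M birth
        a = rng.uniform(0.0, 2.0 * np.pi)
        k = (i + int(round(h * np.cos(a)))) % N
        l = (j + int(round(h * np.sin(a)))) % N
        if grid[k, l] == 1:
            grid[i, j] = 2
```

Visualizing `grid` (for instance with matplotlib's imshow) after a sufficient number of attempts should display domains whose characteristic size is set by h, mirroring the stripes and spots discussed in the Results.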
2018-04-03T05:09:42.003Z
2015-05-11T00:00:00.000
{ "year": 2015, "sha1": "36fe20dd69b2160b3030fab2c103f02ee2816756", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/ncomms7971.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "36fe20dd69b2160b3030fab2c103f02ee2816756", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
13178301
pes2o/s2orc
v3-fos-license
Malaria and Helminth Co-Infections in School and Preschool Children: A Cross-Sectional Study in Magu District, North-Western Tanzania. Background Malaria, schistosomiasis and soil-transmitted helminth infections (STH) are important parasitic infections in Sub-Saharan Africa, where a significant proportion of people are exposed to co-infections of more than one parasite. In Tanzania, these infections are a major public health problem, particularly in school and pre-school children. The current study investigated malaria and helminth co-infections and anaemia in school and pre-school children in Magu district, Tanzania. Methodology School and pre-school children were enrolled in a cross-sectional study. Stool samples were examined for Schistosoma mansoni and STH infections using the Kato-Katz technique. Urine samples were examined for Schistosoma haematobium using the urine filtration method. Blood samples were examined for malaria parasites and haemoglobin concentrations using the Giemsa stain and HaemoCue methods, respectively. Principal Findings Out of 1,546 children examined, 1,079 (69.8%) were infected with one or more parasites. Malaria-helminth co-infections were observed in 276 children (60% of all children with P. falciparum infection). Malaria parasites were significantly more prevalent in hookworm-infected children than in hookworm-free children (p = 0.046). However, this association was non-significant on multivariate logistic regression analysis (OR = 1.320, p = 0.064). Malaria parasite density decreased with increasing infection intensity of S. mansoni and with increasing number of co-infecting helminth species. Anaemia prevalence was 34.4% and was significantly associated with malaria infection, S. haematobium infection and multiple parasite infections. Whereas S. mansoni infection was a significant predictor of malaria parasite density, P. falciparum and S. haematobium infections were significant predictors of anaemia. Conclusions/Significance These findings suggest that multiple parasite infections are common in school and pre-school children in Magu district. Concurrent P. falciparum, S. mansoni and S. haematobium infections increase the risk of lower Hb levels and anaemia, which in turn calls for integrated disease control interventions. The associations between malaria and helminth infections detected in this study need further investigation. Introduction Malaria, schistosomiasis and soil-transmitted helminth infections (STH) are the most important parasitic infections in Sub-Saharan Africa, where a significant proportion of the population, including school children, is exposed to these infections [1,2,3,4]. They are particularly prevalent in rural communities and are closely associated with poverty [5,6,7]. In Tanzania, these infections are a major public health problem among school and pre-school children [8,9,10]. Malaria, caused mainly by P. falciparum, occurs throughout the country with varying levels of endemicity. Stable and perennial transmission occurs in the warm, humid coastal regions and around the Great Lakes [11,12]. Schistosomiasis and STH infections also occur throughout Tanzania. Schistosomiasis is caused by Schistosoma mansoni and Schistosoma haematobium, while the major STH infections are caused by the hookworms Necator americanus and Ancylostoma duodenale, Ascaris lumbricoides and Trichuris trichiura [8,13,14]. In the Lake Victoria basin, a prevalence of schistosomiasis exceeding 50% has been reported [8,15,16].
As a result of geographical overlap, malaria, schistosomiasis and the major STH (hookworm, Trichuris trichiura and Ascaris lumbricoides) share not only the areas in which they occur, but also the human hosts [1,4,17,18,19,20]. Children co-infected with these parasites develop less than optimally, have reduced learning and school achievement [21,22], and have increased susceptibility to other infections [23,24,25]. Epidemiological studies have indicated that individuals co-infected with more than one parasite species are at risk of increased morbidity [26,27,28,29], as well as at risk of developing frequent and more severe disease due to interactions among the infecting parasite species [1,23,24,30,31]. Despite the existence of contrasting evidence [32], there is increasing evidence suggesting that individuals with helminth infections are more likely to develop clinical P. falciparum malaria than helminth-free individuals [23,24,33]. Concurrent parasitic infections also jointly contribute to anaemia. Hookworm and T. trichiura infections are associated with anaemia due to blood and iron loss into the intestinal tract, while S. mansoni and S. haematobium infections cause blood loss in faeces and urine, respectively [34,35]. Malaria contributes to decreased haemoglobin (Hb) concentrations and anaemia through a number of mechanisms, including destruction of parasitized red blood cells, shortening of the life span of non-parasitized red blood cells and decreased production of red blood cells in the bone marrow [36,37]. Considering the limited number of studies on interactions between malaria and helminth co-infections in human populations, the present study was undertaken to investigate the epidemiology of malaria and helminth co-infections and the prevalence of anaemia in school and pre-school children in Magu district, North-western Tanzania. Study area and population The study was conducted in Magu district, North-Western Tanzania. Magu district lies between 2°10′ and 2°50′ south of the Equator and between 33° and 34° east of Greenwich. It has an area of 3,075 km², of which 1,725 km² (56.1%) is covered by Lake Victoria waters. Mean temperature ranges from 18°C to 20°C during the rainy season and from 26°C to 30°C during the dry season. Rainfall is bimodal, with short rains from October to December and heavy rains from March to May. Mean annual rainfall ranges from 700 mm to 1,000 mm. In 2003, the district had a population of 416,113 people, of whom 202,077 (48.6%) were males [38]. The predominant ethnic group in Magu district is the Wasukuma, who practise subsistence farming (animal husbandry and crops) and fishing in Lake Victoria. According to hospital records, malaria remains the number one cause of hospital admissions and of child morbidity and mortality in the district. The district is hyper- to holoendemic for malaria, with transmission occurring throughout the year and peaks during the two rain seasons. Magu district has many water bodies, particularly in areas lying in the Lake Victoria basin, which are ideal snail habitats and mosquito breeding sites. Schistosomiasis and soil-transmitted helminthiasis are also endemic in the district [8,16]. The study took place between October and November 2006. Six primary schools, namely Mwamayombo, Nyashimo, Bulima, Milambi, Ihale and Ijitu, were selected from the study area (Figure 1) and included in the study.
From each selected school, school and pre-school children aged 3-13 years were selected and included in the study. Ethical statement The study was approved by the Medical Research Coordination Committee (MRCC) of the National Institute for Medical Research (NIMR), Tanzania (Reference No. NIMR/HQ/R.8a/Vol. IX/355). Before commencement of the study, the research team conducted meetings with leaders, teachers and community members of all selected villages. During these meetings, the objectives of the study, including the study procedures to be followed, samples to be taken, study benefits and potential risks and discomforts, were explained. Informed consent for all children who participated in the study was sought from parents and legal guardians after they had been clearly informed about the study. Consent from parents/guardians was given in writing. Children were also requested to give assent and were informed of their right to refuse to participate in the study and to withdraw at any time during the study without jeopardizing their right of access to other health services. Invasive procedures such as collection of blood samples were fully explained to parents and children and were carried out using sterile disposable materials. All children found infected with any of the parasites S. mansoni, S. haematobium, soil-transmitted helminths and P. falciparum, and those found with ailments not targeted by the project, were treated free of charge according to national guidelines. Study identification numbers were used instead of children's names, and the information collected was kept confidential. Feedback to the study population in the form of dissemination workshops was given during the course of the study. Collection and examination of stool, urine and blood samples Children were provided with plastic containers and requested to bring stool and urine samples on two consecutive days at about 10:00 am. Stool samples were examined for S. mansoni and intestinal helminths (T. trichiura, A. lumbricoides and hookworm) using the Kato Katz method [39]. Duplicate smears (41.7 mg) were prepared from each stool sample. Intensity of infection for S. mansoni and intestinal helminths was expressed as the mean eggs per gram of faeces (epg) of the two samples (four smears). Urine samples were examined for S. haematobium eggs in 10 ml of urine according to the Nucleopore filtration method [40]. Blood samples (approximately 3 ml) were collected using plain vacutainer tubes or disposable syringes. Thick blood smears were prepared, stained with Giemsa and examined microscopically for malaria parasites. Haemoglobin concentrations (Hb) were determined using a portable HaemoCue photometer. Anaemia was defined as Hb < 120 g/L and severe anaemia as Hb < 80 g/L. Quality control was performed by re-examining 10% of randomly selected blood slides, urine filters and Kato smears by an experienced independent technician. Data analysis Data were double entered into Dbase V software (Borland International, Scotts Valley, California, USA) and analyzed using STATA Version 10 (STATA Corp., Texas, USA). Parasite counts were normalized by log transformation, averaged and then back-transformed to the original scale. Infection intensities were calculated as geometric means of eggs per gram of faeces for S. mansoni and hookworm infections, eggs per 10 ml of urine for S. haematobium and parasites per microlitre of blood for P. falciparum, based on positive samples only.
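For readers who want to reproduce these derived quantities, the following is a minimal Python sketch of the egg-count and haemoglobin handling described above. The 24× multiplier for a 41.7 mg Kato Katz smear is a standard conversion assumed here, since the text does not state it explicitly; function names and the example inputs are illustrative.

```python
import numpy as np

# One 41.7 mg Kato Katz smear corresponds to roughly 1/24 g of stool, so
# eggs per smear x 24 approximates eggs per gram (epg). This multiplier is
# an assumption; the paper only states the smear weight.
KATO_KATZ_FACTOR = 24

def epg_from_smears(egg_counts):
    """Mean eggs per gram of faeces from per-smear egg counts (four smears here)."""
    return float(np.mean(egg_counts)) * KATO_KATZ_FACTOR

def geometric_mean_intensity(counts):
    """Geometric mean of positive counts: log-transform, average, back-transform."""
    positives = np.asarray([c for c in counts if c > 0], dtype=float)
    return float(np.exp(np.log(positives).mean())) if positives.size else 0.0

def anaemia_status(hb_g_per_l):
    """Classify haemoglobin (g/L) using the cut-offs stated in the Methods."""
    if hb_g_per_l < 80:
        return "severe anaemia"
    if hb_g_per_l < 120:
        return "anaemia"
    return "not anaemic"

print(epg_from_smears([3, 5, 4, 4]))               # -> 96.0 epg (a light infection)
print(geometric_mean_intensity([0, 24, 96, 480]))  # positive samples only
print(anaemia_status(115))                         # -> "anaemia"
```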
The student's t-test and one-way analysis of variance (ANOVA) were used to compare geometric mean parasite counts and mean Hb concentrations where two or more than two groups were compared, respectively. For parasite counts, the student's t-test and ANOVA were performed on log-transformed data of positive samples only, whereas for Hb concentrations the student's t-test and ANOVA were performed for all samples examined on the original scale. The Chi-square test was used to compare proportions and to test for associations between malaria prevalence, anaemia prevalence and prevalence of helminth infections between exposure groups. In the multivariate analysis, presence or absence of infection or anaemia was compared among schools, age groups, sexes and other infections using logistic regression analysis fitted as a generalized linear model with a logit link function and adjusting for possible clustering among siblings. All predictors were initially tested for significance separately and then jointly in a multi-variable model. Except for box plots, which were drawn using STATA version 10, all other graphs were drawn using MS-Excel software. Tests were considered statistically significant at p < 0.05. A total of 1,615 school and pre-school children were examined. Pre-school children accounted for 372 (23%) of all children examined. Children for whom complete information was available were included in the analysis (1,546), of whom 759 (49.1%) were boys. Overall mean age was 7 years. Parasite prevalence and infection intensities Out of the 1,546 children included in the analysis, 1,079 (69.8%) were infected with at least one of the parasites P. falciparum, S. mansoni, S. haematobium, hookworm and T. trichiura. S. mansoni infections were generally light to moderate, with only 59 children (9.6%) being heavily infected (epg ≥ 400). Whereas 94 children (3.8%) had heavy S. haematobium infections (≥ 50 eggs/10 ml of urine), all hookworm infections were light (epg < 2000). Three children (0.2%) were infected with T. trichiura, and Ascaris lumbricoides infections were absent. S. mansoni infection was the most prevalent. The prevalence of S. mansoni, S. haematobium and hookworm infections differed significantly across age groups (p < 0.001), whereby older children (6-8 years and 9-13 years) had a higher prevalence of infection compared with younger children (3-5 years). Likewise, the infection intensity of S. mansoni and hookworm differed significantly across age groups (p < 0.001), whereby children in higher age groups had higher parasite loads compared with children in the lower age group. The prevalence and infection intensity of S. mansoni, S. haematobium and hookworm also differed significantly across schools (p < 0.001). Malaria prevalence varied considerably among schools (p < 0.001), being highest in Mwamayombo and Milambi compared with other schools. Younger children had higher malaria parasite density compared with older children (p < 0.001). Prevalence of co-infections Out of the 1,079 infected children, 430 (39.9%) harboured more than one parasite species. Overall, S. mansoni infections occurred as single as well as multiple species infections in almost equal proportions (18.8% and 20.9%, respectively). P. falciparum, S. haematobium and hookworm infections occurred more frequently as multiple species infections than as single species infections.
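As a sketch of the multivariate approach described in the Data analysis section (a logit-link GLM with adjustment for clustering among siblings), the following shows how such a model could be fitted in Python with statsmodels. The study itself used STATA; the DataFrame, its column names and the synthetic data here are purely hypothetical stand-ins for the study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-child data standing in for the study variables.
rng = np.random.default_rng(0)
n = 1546
df = pd.DataFrame({
    "malaria": rng.integers(0, 2, n),
    "hookworm": rng.integers(0, 2, n),
    "age_group": rng.choice(["3-5", "6-8", "9-13"], n),
    "sex": rng.choice(["boy", "girl"], n),
    "school": rng.choice(["A", "B", "C", "D", "E", "F"], n),
    "sibling_cluster": rng.integers(0, 600, n),  # household identifier
})

# Logistic regression fitted as a GLM; the Binomial family uses a logit link
# by default, matching the model described in the Data analysis section.
model = smf.glm(
    "malaria ~ hookworm + C(age_group) + C(sex) + C(school)",
    data=df,
    family=sm.families.Binomial(),
)
# Cluster-robust standard errors adjust for possible clustering among siblings.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["sibling_cluster"]})
print(np.exp(result.params))  # exponentiated coefficients are odds ratios
```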
Figure 2 summarizes the prevalence of single and multiple parasite species infections by age group: multiple parasite infections occurred more frequently in older children (9-13 years) compared with younger children (3-5 and 6-8 years) (χ² = 51.07, p < 0.001). Associations between parasite infections The prevalence of helminth co-infections among P. falciparum-infected children was 60% (276/460). The most common parasite combinations were P. falciparum and S. mansoni (27.2%), P. falciparum and S. haematobium (10.2%), P. falciparum and hookworm (7.4%), P. falciparum, S. mansoni and S. haematobium (7%), P. falciparum, S. mansoni and hookworm (6.5%) and P. falciparum, S. haematobium and hookworm (3.0%). Malaria and helminth co-infections occurred more frequently in older children (9-13 years) compared with younger children (3-5 and 6-8 years), and the difference was significant (χ² = 19.34, p < 0.001). Malaria parasites were significantly more prevalent in hookworm-infected children than in hookworm-free children (35.1% vs 28.8%) (χ² = 3.98, p = 0.046). However, this association turned out to be non-significant when multivariate logistic regression analysis was performed while adjusting for other confounding factors (OR = 1.320, p = 0.064). Children with hookworm infection were more likely to be infected with S. haematobium (χ² = 7.52, p < 0.01) compared with children who were not infected with hookworm. Further, children infected with hookworm were also more likely to be infected with S. mansoni (χ² = 6.40, p = 0.011) compared with children who were not infected with hookworm. The prevalence of malaria parasites tended to increase with increasing number of co-infecting helminth species. The prevalence of malaria parasites was 29%, 35% and 41.2% in children harbouring one, two and three helminth species, respectively, compared with 28.3% in helminth-free children. However, the difference was not significant (χ² = 5.63, p = 0.131). Association between malaria parasite density and helminth infections Except for hookworm infection, malaria parasite density was negatively correlated with helminth infections (prevalence and infection intensity). Figure 3 shows the relationship between malaria parasite density and S. mansoni infection, while Figure 4 shows the relationship between malaria parasite density and the number of co-infecting helminth species. Malaria parasite densities tended to decrease with increasing infection intensity of S. mansoni. Geometric mean malaria parasite density for children without S. mansoni infection was 745 (95% CI 633-879) and was significantly higher compared with 551 (95% CI 434-700) and 399 (95% CI 297-534) for children with light and moderate-to-heavy S. mansoni infection, respectively (F = 6.9, p < 0.01) (Fig. 3). Figure 4 shows the relationship between malaria parasite densities and overall helminth infections: malaria parasite density decreased with increasing number of co-infecting helminth species. Prevalence of anaemia and association with infection status Out of the 1,546 children, 532 (34.4%) were anaemic and only 16 (1.0%) were severely anaemic. Overall mean Hb concentration was 123.7 (95% CI 123.0-124.4). The prevalence of anaemia was significantly associated with malaria infection (χ² = 15.58, p < 0.001) and S. haematobium infection (χ² = 16.34, p < 0.001). For P. falciparum and S. haematobium infections, mean Hb levels decreased significantly with increasing infection intensities (p < 0.01). Children who were not infected with any parasite had the highest mean Hb levels (126.0, 95% CI 124.7-127.2) and hence the lowest prevalence of anaemia (27.2%).
Except for P. falciparum and hookworm co-infections, children who were infected with more than one parasite species tended to have lower mean haemoglobin levels and hence a higher prevalence of anaemia compared with children infected with one parasite. The highest prevalence of anaemia (60%) was observed in children co-infected with the three parasites P. falciparum, S. mansoni and S. haematobium (Figure 5). There was a significant difference in the prevalence of anaemia and mean haemoglobin levels between uninfected children and those infected with one or more parasites (p < 0.01). P. falciparum and S. haematobium infections were significant predictors of anaemia after adjusting for age and sex (Table 2). Discussion Malaria, schistosomiasis and STH are a major public health problem, particularly for school and pre-school children in Sub-Saharan Africa, where their occurrence as multiple species infections is known to be the norm. Understanding the epidemiology of these infections among school and pre-school children and their joint contribution to lower haemoglobin levels and anaemia is important, as the findings may support the design of integrated disease control strategies. Results of this study demonstrated that malaria, schistosomiasis and soil-transmitted helminth infections are prevalent in school and pre-school children in Magu district and that co-infections of these parasites were common. These findings are supported by other studies in Sub-Saharan Africa [41,42,43,44,45]. The most prevalent parasite species in the studied population were S. mansoni, P. falciparum and S. haematobium. The major STH infections, hookworm and T. trichiura, were the least prevalent. Ascaris lumbricoides was not detected in the current study. This observation concurs with the findings of the study of Lwambo et al [8], which reported this species to be rare, and is in line with the known distribution of A. lumbricoides in Sub-Saharan Africa [46]. The observed prevalences of S. mansoni and S. haematobium are in accordance with previous studies in the area and are related to the occurrence of the snail intermediate hosts for S. mansoni and S. haematobium and their ecological preferences [8,14,15,47]. The low prevalence of STH infections in the studied population could be a result of the relatively young age of most of the children examined, as the prevalence of STH, particularly hookworm, peaks in early adulthood [34]. For schistosomiasis and hookworm infections, the observed prevalence and infection intensity were generally age dependent, which reflects the fact that infection levels are explained by water contact patterns, duration of exposure to infection and acquired immunity [35,48,49,50]. The study also observed significant variation among schools in both prevalence and infection intensities of S. mansoni and S. haematobium, which could be explained by variations in exposure, the focal nature of schistosomiasis and the over-dispersed distribution of heavy and light infections between and within communities [35]. Malaria parasite densities decreased with increasing age, which is a normal trend in malaria-endemic areas and is related to the development of specific anti-malarial immunity [51]. In addition to single parasite infections, this study also demonstrated that co-infections are very common in the study area and that interactions exist among them. The majority of children who were infected with P. falciparum were concurrently infected with one or more helminth species.
In the bivariate analysis, hookworm infection was found to have a positive association with malaria infection and malaria parasite density. However, this association was not confirmed by multivariate logistic regression analysis and hence needs further investigation. Previous studies which found a positive association between malaria and hookworm infection include Humphries et al [18], Nacher et al [23], Hillier et al [25], Spiegel et al [33], and Yatich et al [52]. However, the study of Shapiro et al had contrasting findings [32]. Studies which favor the existence of a positive association between hookworm and malaria infection propose various underlying mechanisms: there is evidence suggesting that environmental, socio-economic and behavioural factors could act as shared risk factors for exposure to both infections [4,19,25], as well as evidence for the involvement of immunological mechanisms which may lead to increased susceptibility of helminth-infected individuals to P. falciparum infection [24,33,53]. On the other hand, this study showed a negative association between S. mansoni and S. haematobium infections and malaria parasite intensity, in line with observations made by Lyke et al [54] and Briand et al [49] in Mali and Senegal, respectively. The study of Lyke et al [54] demonstrated that S. haematobium-infected children had lower geometric mean malaria parasite density compared with children without S. haematobium infection. The study of Briand et al [49] showed that children with light S. haematobium infection had lower P. falciparum parasite densities compared with those not infected. One possible explanation for this observation could be cross-reactivity between anti-P. falciparum antibodies and anti-schistosomal antibodies, as has been reported for S. mansoni and P. falciparum specific antibodies [55,56,57]. Anaemia was prevalent in the study area, though at a relatively low level compared with the study of Lwambo et al [8], which reported an overall prevalence of anaemia of up to 62.4%. This observation may reflect a changing pattern in the prevalence of anaemia and the distribution of helminth infections (prevalence and infection intensity) in the study area. Another possible explanation could be the difference in age distribution of the children who participated in the two studies. While the current study enrolled children between 3 and 13 years, the study of Lwambo et al [8] enrolled children between 7 and 20 years. The majority of anaemia cases in the current study were moderate. Only 16 children (1%) had severe anaemia, probably due to the fact that the majority of helminth infections were also light. This observation is in agreement with the findings of Ajanga et al [15], Lwambo et al [58] and Koukounari et al [59], who observed that anaemia due to helminth infections is dependent on the intensity of infection. As expected, and in accordance with the findings of other studies [8,34,48,60,61,62,63], lower Hb concentrations and anaemia were associated with single and multiple parasitic infections. Although the aetiology of anaemia is multifactorial, parasitic infections are known to be among the major causes [41,59,60]. While P. falciparum infection causes anaemia through complex mechanisms including destruction of parasitized red blood cells, decreased production of red blood cells (RBCs) and/or dyserythropoiesis [36,51,64], S. mansoni, S. haematobium and hookworm infections cause anaemia through chronic blood loss [34,35,41,48].
In contrast to previous studies [7,8,48,60,62,65,66], hookworm was found not to be associated with anaemia, probably due to the relatively low intensities of hookworm infection detected in the studied population. Further, multiple logistic regression analysis showed that malaria and S. haematobium infections were predictors of anaemia, a finding which indicates that, in addition to the known effect of single parasite species on anaemia, multiple parasite infections can interact to enhance the risk of anaemia. Interestingly, the highest prevalence of anaemia (60%) was observed in children concurrently infected with P. falciparum, S. mansoni and S. haematobium, and in children concurrently infected with P. falciparum and S. haematobium (52.3%). Anaemia was also more prevalent in children concurrently infected with three or four parasites compared with those with only one or no parasite infection. These observations demonstrate a possible synergistic interaction of P. falciparum, S. mansoni and S. haematobium and of multiple parasite infections in the aetiology of anaemia. Limitations of the current study in elucidating associations between malaria and helminth co-infections include the lack of information on household socioeconomic status and environmental factors, which have been shown by other studies to influence the occurrence of co-infections [19,52]. The lack of information on other causes of anaemia, such as malnutrition, was another limitation. Overall, the results of this study have demonstrated that malaria, schistosomiasis and STH infections are prevalent in school and pre-school children in Magu district and that polyparasitism is also very common. These findings also suggest that concurrent P. falciparum, S. mansoni and S. haematobium infections increase the risk of lower Hb levels and anaemia, which in turn calls for integrated disease control interventions. The associations between malaria and helminth infections detected in this study were not conclusive and hence need further investigation.
2017-05-21T05:45:23.577Z
2014-01-29T00:00:00.000
{ "year": 2014, "sha1": "8ff15e7397c2200c59e1188010eb6aef3d0309e3", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0086510&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ff15e7397c2200c59e1188010eb6aef3d0309e3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
261430733
pes2o/s2orc
v3-fos-license
Effects of an intravenous lidocaine bolus before tracheal extubation on recovery after breast surgery – Lidocaine at the End (LATE) study: a randomized controlled clinical trial Aim To investigate whether IV lidocaine improves emergence, early recovery, and late recovery after general anesthesia in women who undergo breast surgery. Methods Sixty-seven women with American Society of Anesthesiologists physical status I-II, scheduled for breast surgery, were randomized to receive an IV lidocaine 1.5 mg/kg bolus (n = 34) or saline placebo (n = 33) before tracheal extubation. Anesthesia was induced with thiopental, vecuronium, and fentanyl, and maintained with sevoflurane ~1 MAC and 50% nitrous oxide in oxygen. No postoperative nausea and vomiting (PONV) prophylaxis was given. Time to extubation, bucking before extubation, and quality of emergence, as well as early and late recovery (coughing post-extubation, sore throat, PONV, and pain scores) within 24 hours postoperatively, were evaluated. Diclofenac and meperidine were used for the treatment of pain and metoclopramide for PONV. Results The groups did not significantly differ in demographics, intraoperative data, or PONV risk scores. Time to extubation was ~8 minutes in both groups. Patients who received IV lidocaine had significantly smoother recovery, both statistically and clinically; they had better extubation quality scores (1.5 [1-3] vs 3 [1-5], P < 0.001), less bucking before extubation (38% vs 91%, P < 0.001), less coughing after extubation (at 1 min 18% vs 42%, P = 0.026; and at 24 hours 9% vs 27%, P = 0.049), and less sore throat (6% vs 48%, P < 0.001). Late PONV decreased (3% vs 24%, P = 0.013). There were no differences in pain scores and treatment. Conclusion In women who underwent breast surgery, an IV lidocaine bolus administered just before extubation attenuated bucking, cough and sore throat, and PONV for 24 hours after general anesthesia, without prolonging the emergence. ISRCTN: 71855856. Smooth emergence and recovery from anesthesia are an important part of anesthesia practice. Avoiding bucking and coughing on emergence reduces multiple postoperative complications, ranging from wound dehiscence and hematomas, especially in plastic and hernia repair surgeries, to an increase in hemodynamic response, which could lead to acute cardiovascular events and increased intraocular and intracranial pressure. One way of reducing bucking and coughing on emergence is IV lidocaine given before endotracheal extubation. A recent meta-analysis of 16 studies (n = 1516) showed that early recovery was improved by administration of 1-1.5 mg/kg of IV lidocaine (1). The meta-analysis included all studies regardless of the timing at which lidocaine was given, either preoperatively (pre-induction or on induction) or intraoperatively (from the end of surgery, on wound closure, or on reversal of neuromuscular blockade up to two minutes before extubation). It showed large reductions in post-extubation cough (risk ratio [RR] 0.64; 95% confidence interval [CI] 0.48-0.86) and in postoperative sore throat at 1 h (RR 0.46; 95% CI 0.32-0.67), but not in late recovery or postoperative nausea and vomiting (PONV) (1).
Since the data about the influence of pre-extubation lidocaine on early and late recovery are sparse, we investigated whether IV lidocaine would improve emergence and prevent complications in early and late recovery after general anesthesia in women undergoing breast surgery. The primary outcome was the quality of the emergence after general anesthesia (bucking and coughing). The secondary outcomes were PONV and postoperative pain. Participants and Methods All procedures were performed at Zadar General Hospital, a single-center regional medical hospital, between July 2007 and December 2013. This study was retroactively registered in ISRCTN because, at the time when it was approved by an institutional review board and recruitment started, registration was not required. Written informed consent was obtained from all patients before any study procedures were performed. This prospective, double-blinded, randomized controlled trial was approved by the Institutional Ethics Committee, General Hospital Zadar (01-1280/07). The study was conducted in accordance with the principles of the Declaration of Helsinki. Participants We enrolled 67 adult women with American Society of Anesthesiologists physical status I to II undergoing breast tumor surgery (lumpectomy, simple mastectomy, radical or modified radical mastectomy). Exclusion criteria were expected difficult intubation, multiple intubation attempts, a history of sore throat, cough or respiratory infection, having been intubated within the last two months, chronic obstructive lung disease, asthma, treatment with β-blocking agents, known hypersensitivity to drugs used in the study protocol, and obesity (body mass index > 30 kg/m²). Predictors of PONV were also assessed. Patients were not included if the protocol was broken or conditions arose that influenced outcomes during the surgery, such as unexpected intraoperative drug allergy, severe intraoperative hypotension lasting more than three minutes, perioperative hypoxia lasting more than one minute, excessive blood loss, difficult intubation, or serious postoperative surgical complications. Management of anesthesia All patients received 7.5 mg of midazolam per os (PO) 1 hour before the surgery, with no prophylactic antiemetics, which was the standard of care at the hospital at the time of the study. We used standard monitoring, including electrocardiography, noninvasive blood pressure, pulse oximetry, and capnography. Anesthesia was induced with thiopental 5 mg/kg, fentanyl 1 μg/kg, and vecuronium 0.1 mg/kg. Patients were manually ventilated via face mask with oxygen 6 L/min for three minutes before intubation. An endotracheal tube (ETT) of 7.5 mm was used. ETT cuff pressure was measured with an analog manometer to avoid increases above 20 cmH2O owing to diffusion of nitrous oxide (N2O). Anesthesia was maintained with sevoflurane at 0.8%-1.5% concentration (~1 minimum alveolar concentration, MAC) and 50% N2O in oxygen with a fresh gas flow of 2 L/min. Supplemental bolus doses of fentanyl (1 μg/kg) were added to keep heart rate (HR) and blood pressure within 20% of baseline values, but not within the last 30 minutes before extubation. Additional vecuronium was added to maintain one to two twitches on the train-of-four (TOF) monitor. Patients' lungs were mechanically ventilated to maintain normocapnia (end-tidal CO2 ~35 mm Hg). All patients received about 10 mL/kg/h of crystalloids during surgery.
Randomization Patients were randomized by computer-generated random numbers to receive either lidocaine (GLido) or the same volume of saline (GSaline) on emergence from anesthesia. At the end of skin closure, the volatile anesthetic and N2O were switched off, neostigmine 2.5 mg and atropine 1 mg were given for neuromuscular blockade reversal, and the fresh gas inflow rate was increased to 7 L/min of 100% oxygen. At that time, patients received 2% lidocaine (1.5 mg/kg) or the same amount of saline. No physical stimulation was applied during emergence. Patients were extubated when they achieved a spontaneous respiratory rate greater than 10 per minute with end-tidal CO2 ≤ 45 mm Hg and a minimum tidal volume of 300 mL, reversal of neuromuscular blockade (TOF 4 with sustained tetany), eye opening, strong hand grip, or when demonstrating intentional movement of the extremities, raising the head, or attempting to self-extubate. Outcome measures Early recovery times (from delivery of 100% oxygen to first eye opening and response to verbal commands, to extubation, and to orientation to time and place) were recorded by a blinded anesthesiologist who did not perform the anesthesia. A simple verbal order to "open your eyes," "squeeze my hand," "open your mouth," or "stick out your tongue" was given every 15 seconds during emergence. The number of bucking episodes before extubation and the number of cough and sore throat episodes at one, three, and five minutes after extubation, as well as at two and 24 hours postoperatively, were recorded. The quality of tracheal extubation was evaluated on a five-point rating scale (1 = no cough or buck; 2 = very smooth, minimal coughing; 3 = moderate coughing; 4 = large degree of coughing or bucking; and 5 = poor extubation, very uncomfortable) (2). Systolic, diastolic, and mean blood pressures and heart rate (HR, beats per minute [bpm]) before induction of anesthesia and immediately before and after extubation were also measured. An anesthesiologist who was blinded to the study protocol noted complications during emergence in the operating room (OR): laryngospasm, bronchospasm, cessation of breathing (number of episodes and duration in seconds), cyanosis, SpO2 < 90%, aspiration of gastric contents and secretions, severe restlessness/agitation (bites ETT, throws self around, kicks hands and feet), vomiting and/or retching, hypertension > 35% of preoperative values, tachycardia > 35% of preoperative values, bradycardia, or hypotension. The same blinded anesthesiologist rated the quality of emergence (0 = calm patient, no adverse events, cooperative patient; 1 = calm patient, one adverse event, patient trying to cooperate; 2 = restlessness, one adverse event, uncooperative patient; 3 = restless, at least two adverse events, trying to cooperate; 4 = restless, at least two complications, uncooperative patient).
A modified Aldrete score (0-10 points) was used to determine OR and post-anesthesia care unit discharge readiness (≥ 9 points). Postoperatively, patients were allowed to drink after three hours, if tolerated. All patients stayed in hospital for at least 24 hours. The incidence of postoperative nausea (PON) and vomiting (POV) and the use of rescue antiemetics were assessed at two and 24 hours after surgery. Patients were considered to have had PONV if they experienced at least one episode of nausea, vomiting, or retching, or any combination of these, during the initial 24 postoperative hours. POV was defined as at least one episode of vomiting or retching that occurred within 24 postoperative hours. PONV was defined as early (within the first two hours) or late (2-24 postoperative hours). Severity of pain was evaluated with a 100-mm visual analog scale (VAS) at the same time points (0 = no pain to 100 = maximum pain). The same blinded anesthesiologist collected the postoperative data. Rescue antiemetic (metoclopramide 0.4 mg/kg IV) was given to patients who experienced two or more episodes of vomiting and/or retching within 30 min, any nausea lasting more than 15 min, or a nausea VAS score of 50 mm or greater, or when they requested treatment. The pain VAS score and the total amount of postoperative opioids were recorded at two and 24 h after surgery. Diclofenac 75 mg intramuscular injection (IM) was given immediately on arrival in the recovery room. For severe pain (VAS > 40 mm), meperidine up to 100 mg IV was used and repeated four hours later if needed. Statistical analysis The sample size was calculated on the assumption that coughing at the end of anesthesia would occur in two-thirds of patients (66%) and that lidocaine would decrease this by 50% (to one-third of patients, 33%); thus, 33 patients in each group were needed for the primary outcome for a power of 0.8 and an alpha level of < 0.05. We estimated a 10% drop-out rate. Data are presented as mean (SD) or median (range). The normality of distribution was tested with a D'Agostino-Pearson test. The significance of differences between categorical variables was assessed with a Fisher exact test or chi-square test, and that between continuous variables with a Mann-Whitney test or t test. P < 0.05 was considered significant. Data analysis was performed with IBM SPSS Statistics, v. 22.0 (IBM Corp, Armonk, NY, USA). Results Of 73 women who were initially screened for eligibility, six were excluded. Figure 1 presents the flowchart of the study enrollment and the reasons for exclusion. Of the remaining 67 patients, 34 were randomized into GLido and 33 into GSaline. No significant differences were observed between the groups in demographics, PONV risks (Table 1), or preoperative data (Table 2). Surgery time was significantly shorter (on average 17 minutes) in GLido, but there was no significant difference in anesthesia times even though anesthesia lasted 13 min less in GLido (Table 2).
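The sample size target in the Statistical analysis section can be reproduced with a standard two-proportion normal-approximation formula. Which exact formula the authors used is not stated; the unpooled version below is a minimal sketch that happens to match the reported figure of 33 per group.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Unpooled normal-approximation sample size for comparing two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided test
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# 66% coughing expected without lidocaine, halved to 33% with lidocaine.
print(n_per_group(0.66, 0.33))  # -> 33 patients per group
```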
An IV lidocaine bolus on emergence was administered about eight minutes before extubation in both groups and did not delay extubation. All emergence variables, whether by subjective (emergence and extubation scores) or objective criteria (bucking and hemodynamics), were significantly improved (P < 0.001) (Table 3). The greatest improvement was observed in the number of patients who bucked before extubation (91% vs 38% in GSaline vs GLido, respectively). Furthermore, in GSaline 10 patients had 10 or more episodes of bucking (up to 19 episodes), while only one patient in GLido had more than five episodes (only seven episodes). The average number of bucking episodes was clinically and statistically significantly lower in GLido compared with GSaline (0.97 ± 1.64 vs 5.89 ± 4.99; median 0 [0-7] vs 5 [0-19], P < 0.001). The Extubation Quality Score was on average clinically and statistically significantly better (lower) in GLido (1.58 ± 0.66 vs 3.15 ± 1.00; median 1.5 [1-3] vs 3 [1-5]). Lidocaine also significantly improved recovery after extubation (Table 4). Post-extubation coughing within the first minute was significantly less frequent in GLido vs GSaline (18% vs 42%; P = 0.026). Moreover, only one patient in GLido had more than one coughing episode (three episodes), compared with 10 patients in GSaline (two of whom had 11 and 12 episodes). The average number of coughing episodes was significantly lower in GLido (1.33 ± 0.82 vs 3.93 ± 3.54; median 1 [1-3] vs 3 [1-12]). Beyond the first minute, there was no difference in early coughing between the groups, but there was a difference between two and 24 hours (Table 4). Lidocaine reduced the incidence of sore throat tremendously; only two patients reported sore throat in the first 24 h vs 10 patients in GSaline. Also, the total number of patients with complications on emergence was reduced in GLido (18% vs 55%; P = 0.002) (Table 4). All patients in GLido had an Aldrete score of 10, while four patients in GSaline had a score of 9 on exit from the OR. The difference was statistically (P = 0.039) but not clinically significant. The hemodynamic response on emergence and extubation was significantly attenuated in the GLido group (Table 3). The increase in HR in GLido was only 3 bpm, while in GSaline it was 13 bpm (P < 0.001). The IV lidocaine bolus on emergence reduced PONV at all measurements, but a significant difference was observed only in late PONV (3% vs 24%, P = 0.013) and late PON (3% vs 21%, P = 0.027) (Table 5). All patients in both groups received diclofenac IM at the end of the surgery. There were no differences in postoperative pain or pain medications between the groups (Table 6). No signs of lidocaine adverse effects or toxicity were noted in any patient. Discussion The results of our Lidocaine at the End (LATE) study showed profound effects of IV lidocaine on early and late recovery, without delaying extubation, when given at the end of the surgery after skin closure was completed, neuromuscular blockade reversal was given, and the anesthetic gases were turned off.
The most striking improvement was the decrease in the number of patients who bucked before extubation by more than half (GSaline 91% vs GLido 38%), a number needed to treat (NNT) of two, as well as in the number of bucking episodes (GLido 0 [0-7] vs GSaline 5 [0-19]). Early coughing was also improved (GSaline 42% vs GLido 18%), with an NNT of four. This agrees with the findings of a recent meta-analysis, which showed an NNT of five for postoperative cough (1). To our knowledge, this is the first study to separate bucking before extubation from coughing after extubation, as well as to frequently measure coughing after extubation. We believe that it is important to report bucking before extubation separately, not only for research purposes but also from a clinical standpoint. For instance, anesthesia personnel could respond to patients having episodes of multiple/severe bucking by extubating them before extubation criteria are reached, thus putting them at risk of respiratory complications. Reducing bucking by more than half could significantly improve outcomes, especially in patients who should be completely awake before extubation (difficult intubation or sleep apnea) after surgery that demands avoiding bucking. Indeed, in our study, all combined emergence complications were decreased 3-fold (GSaline 55% vs GLido 18%), with an NNT of 2. Moreover, the quality of emergence rated by a blinded anesthesiologist was smoother in patients who received IV lidocaine, a finding that could be important in high-risk patients. Although the exact mechanism by which IV lidocaine reduces bucking and coughing is not well understood, the timing of its administration is important. If IV lidocaine is given too early, its effect could already be dissipating. Indeed, IV lidocaine did not reduce post-extubation cough when it was given on average 21 minutes before extubation (3). Even 10 minutes before extubation might be too early (4,5). On the contrary, if it is given too close to extubation, it might not work either. For example, when IV lidocaine was given just three minutes before extubation, it had no effect on coughing (6). Between five (7) and eight minutes (our LATE study) before extubation could be a sweet spot for reducing bucking and coughing on emergence. In our study, lidocaine was given when the surgery was completed, the gases were turned off (sevoflurane/50% N2O), and reversal was administered, which made extubation time (~8 minutes) highly predictable. This timing might correlate with the IV lidocaine peak plasma concentration after a single IV bolus dose, with a rapid decline after five minutes (8). Our study confirmed the beneficial effect of IV lidocaine on postoperative sore throat. It almost completely eliminated it, with only two patients reporting sore throat within 24 h, an NNT of 2.4. The meta-analysis (1) showed an NNT of four. The difference could be explained by the timing of IV lidocaine administration, because the meta-analysis analyzed all perioperative administrations of IV lidocaine, including preinduction. Also, as in most previous studies, IV lidocaine's influence on hemodynamics was confirmed. Changes in blood pressure were on average 10 mm Hg lower in GLido than in GSaline after extubation. This finding might not be clinically significant, but HR was also lower by 10 bpm, which could be clinically significant. HR in GLido was practically unchanged, with a delta of only 3 bpm.
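The NNT figures quoted in this discussion follow directly from the absolute risk reduction between the two groups; a minimal sketch using the event rates reported above:

```python
def nnt(control_rate, treated_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (control_rate - treated_rate)

print(round(nnt(0.91, 0.38), 1))  # bucking before extubation: ~1.9, reported as 2
print(round(nnt(0.42, 0.18), 1))  # coughing in the first minute: ~4.2, reported as 4
print(round(nnt(0.48, 0.06), 1))  # sore throat within 24 h: ~2.4, as reported
```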
The influence of an IV lidocaine bolus administered on emergence on PONV has been less investigated, and the evidence was of very low quality. A meta-analysis of perioperative infusion of lidocaine showed a reduction in PONV (9). The incidence of early PONV was 20.1% (45:218) of participants in the lidocaine group and 28.4% (63:222) of participants in the control group, with an NNT of 12. Late PONV (within 72 h postoperatively) occurred in 26.6% (154:545) of lidocaine participants and in 35.6% (192:539) of control participants, with an NNT of 11. A meta-analysis of IV bolus lidocaine in pediatric patients showed that intravenous lidocaine may reduce the incidence of PONV, but the quality of the evidence was very low (10). The incidence of PONV within 24 hours after anesthesia was 3.73% in the lidocaine group and 4.87% in the control group (NNT of 100) (10). In our study, lidocaine reduced the incidence of PONV within the first 24 h almost by half (GSaline 39.3% vs GLido 20.6%, NNT of 5.5), but the difference was not significant, probably because the study was underpowered. All other incidences were reduced in both early and late POV/PON, but only late PONV reached significance. The IV lidocaine bolus also borderline non-significantly reduced the use of rescue antiemetics (GSaline 30.2% vs GLido 8.8%, NNT of 4.5). This suggests that IV lidocaine may clinically significantly reduce PONV and the use of antiemetics. One of the differences between studies is the type of surgery and the use of PONV prophylaxis. In our study, we did not use any PONV prophylaxis, which was the standard in our hospital at the time of the study. This gave us a "raw" effect of the IV lidocaine bolus, but its clinical benefit may be attenuated by routine use of antiemetics. Limitations of our study include that it was done in one center, in only one type of surgery, and with relatively short surgery durations. Our results may not generalize to other types of surgery. Since we administered IV lidocaine at the end of the surgery, one may expect similar results in longer surgeries. Another limitation is the small number of patients in each group, which may render the study underpowered for the secondary outcome (PONV), as discussed above. The strength of our study is that anesthesia emergence was standardized and extubation timing was highly predictable. We believe that it is important to separate extubation bucking from postoperative coughing. These conditions may have different etiologies: pre-extubation bucking is mostly, if not exclusively, caused by direct ETT stimulation of the trachea, whereas post-extubation coughing is mostly caused by secretions and injury of the tracheal mucosa. We think that in future studies pre-extubation bucking should be reported separately from post-extubation/postoperative coughing. An additional strength is that our study showed the "raw" (without any PONV prophylaxis) effect of IV lidocaine on PONV. In summary, our LATE study showed an overwhelming beneficial effect of an IV lidocaine 1.5 mg/kg bolus given eight minutes before extubation on bucking, coughing, sore throat, and increases in HR and BP, as well as on PONV, without any delay in extubation time.
In accordance with the ICMJE recommendations (http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trialregistration.html), the authors provided an explanation for the retrospective registration. Furthermore, ethical approval was given by an authorized committee before patient recruitment, and no ethical concerns were present. With all this in mind, the CMJ editors decided that the "failure to appropriately register a clinical trial" was not "intended to or resulted in biased reporting," and the manuscript was deemed acceptable for publication. Funding from department resources only. Declaration of authorship: both authors conceived and designed the study; TS acquired the data; both authors analyzed and interpreted the data; both authors drafted the manuscript; both authors critically revised the manuscript for important intellectual content; both authors gave approval of the version to be submitted; both authors agree to be accountable for all aspects of the work. Competing interests: all authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (avail-
Figure 1. Consolidated Standards of Reporting Trials flow diagram of the study.
Table 1. Demographic data of patients who received an IV lidocaine 1.5 mg/kg bolus or saline placebo. Abbreviations: ASA - American Society of Anesthesiologists; h/o - history of; PONV - postoperative nausea and vomiting. Data are expressed as mean ± standard deviation, median (min-max value) or n (%).
Table 4. Recovery data of patients who received an IV lidocaine 1.5 mg/kg bolus or saline placebo. Data are expressed as n (%) or median (minimal-maximal value).
Table 5. Postoperative nausea and vomiting (PONV) data of patients who received an IV lidocaine 1.5 mg/kg bolus or saline placebo.
Table 6. Postoperative pain of patients who received an IV lidocaine 1.5 mg/kg bolus or saline placebo.
2023-09-02T06:18:08.023Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "8974501ae8d5c884dcabdfc07bed99ac675889ae", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "cb6d350e24359a960d91633d40abdd6f114a8ef1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246885415
pes2o/s2orc
v3-fos-license
Bilateral Posterolateral Sulcus Approach for the Removal of Spinal Intramedullary Metastatic Adenocarcinoma: A Technical Case Report Spinal intramedullary metastasis is an extremely rare event that occurs in advanced cancer. The surgical indications for spinal intramedullary metastasis are highly limited because of surgical difficulty and poor prognosis. In this technical case report, we present a rare case of spinal intramedullary metastasis from the lung that recurred late after local radiation to the spinal cord. The patient progressively experienced relapsed buttock pain and developed gait and urination disorders late after treatment for lung cancer. Imaging examinations suggested recurrence of spinal intramedullary metastasis in the conus medullaris. Systemic examinations revealed no apparent recurrence in other organs, including the primary lung lesions. Gross total resection of the tumor within the conus medullaris was safely performed using a unilateral posterolateral sulcus (PLS) approach with the addition of the contralateral PLS approach. To the best of our knowledge, this is the first case in which a spinal intramedullary metastatic tumor was successfully removed using a bilateral PLS approach. Introduction Spinal intramedullary metastasis (SIM) is an extremely rare event in advanced cancer. [1][2][3][4][5][6][7][8] SIM often corresponds to cancer stage 4 when diagnosed, making it difficult for surgeons to determine the surgical indications. In this technical case report, we present a rare case of SIM from the lung that recurred late after local radiation to the spinal cord. Gross total resection of the tumor was safely performed using myelotomy via the posterolateral sulcus (PLS) on both sides, which is equivalent to dorsal root entry zone (DREZ) myelotomy (Fig. 1). [9][10][11][12] To safely remove the spinal intramedullary metastatic tumor, we describe the successful application of a bilateral PLS approach. Clinical Report History and presentation A 56-year-old woman was diagnosed with lung cancer 7 years ago. At that time, the cancer stage was 3A, with T-1b, N-2, and M-0. She underwent thoracoscopic left upper lobectomy, the pathological examination of which resulted in a diagnosis of lung adenocarcinoma. After surgery, the patient received postoperative chemotherapy. The following year, she complained of numbness in her buttocks and posterior thighs and finally developed gait and urination disorders. Imaging examinations suggested the presence of an intramedullary tumor in the conus medullaris (Fig. 2A and B). She received local radiation therapy (53 Gy/29 Fr) for a possible diagnosis of SIM of lung adenocarcinoma and was further treated with molecular targeted therapy using epidermal growth factor receptor-tyrosine kinase inhibitors for 2 years. Her symptoms gradually resolved, and she recovered from the gait and urination disorders (Fig. 2C and D). Approximately 5 years after radiation therapy, the patient progressively experienced relapsed buttock pain, gait, and urination disorders. Neurological examinations showed moderate weakness and severe sensory impairment in both lower limbs. Imaging examinations suggested a recurrence of the intramedullary lesions in the conus medullaris (Fig. 2E, F and G). Systemic examinations, including chest and abdominal contrast-enhanced computed tomography, revealed no apparent recurrence in other organs, including the primary lung lesions. Surgery The patient was placed in a prone position under general anesthesia.
The thorax was elevated 15°, and her head was maintained in neutral flexion without rotation. Transcranial motor-evoked potentials and sensory-evoked potentials were routinely assessed for intraoperative neurophysiological monitoring. Laminectomy was performed in the usual en bloc manner. The laminectomy was long enough to expose the entire lesion and was widened to the medial pedicular surface. The dura mater was opened while preserving the arachnoid membrane. The arachnoid membrane was also opened with care to avoid damage at points of arachnoid adhesion or vascular connection. Mild swelling of the spinal cord was confirmed upon initial inspection (Fig. 3A), and it was assumed that the posterior median sulcus (PMS) approach would be technically challenging. As an alternative, the PLS approach from the left side was finally selected. A linear incision along the PLS on the left side was made just over the tumor location after careful inspection of the dorsal nerve rootlets. These dorsal nerve rootlets were attached to the spinal cord along a shallow vertical groove of the PLS, which naturally continued to the posterolateral tract of Lissauer. The crossing vessels were carefully coagulated at very low power under continuous saline irrigation. Myelotomy was extended from the rostral to the caudal side of the tumor by meticulously splaying the spinal cord tissue. Meticulous myelotomy along the left PLS revealed the rostral surface of the tumor (Fig. 3B and C). The tumor-cord interface was well confirmed in the rostral part of the myelotomy; however, the tumor boundaries gradually became obscured in the caudal part of the myelotomy. Another myelotomy was therefore performed on the opposite right side to identify the tumor-cord interface (Fig. 3D). The entire boundary of the tumor was confirmed from the bilateral openings of the myelotomy. Gentle dissection of the tumor-cord interface was continued in the longitudinal plane over the extent of the tumor, and, finally, gross total resection of the tumor was accomplished (Fig. 3E). The shape of the spinal cord was restored by suturing the pial edges together on both sides (Fig. 3F). The waveform of intraoperative neurophysiological monitoring at the end of surgery was almost the same as at the start of surgery. Postoperative course Pathological examination of the tumor revealed an atypical epithelial structure and glandular cavity-like arrangement, and a histopathological diagnosis of intramedullary metastatic adenocarcinoma was made. The postoperative course was uneventful. The patient showed satisfactory relief of buttock pain early after surgery, and her gait and urination disorders gradually improved. She was eventually able to walk independently on flat ground. Although recurrence of the lung cancer was not clear, chemotherapy was resumed. Imaging examinations early and late after surgery revealed no tumor recurrence (Fig. 2H, I, J and K). Discussion Although the patient in this case experienced progressive neurological symptoms, systemic metastasis of the cancer was unclear. The possible diagnoses before surgery included not only SIM of lung cancer but also delayed radiation necrosis or radiation-induced tumors, such as cavernous malformations. 13,14) The PLS approach from the left side was first selected to remove the tumor, as it appeared to deviate slightly to the left side. However, the left PLS approach was not sufficient to expose the tumor.
Careful intraoperative assessment helped us to perform another myelotomy via the PLS on the right side. Consequently, a bilateral PLS approach was successfully applied. In general, SIM is very rare, with an estimated reported incidence of 0.2%-3.4% of all metastases. [1][2][3][4][5][6][7][8] When SIM is diagnosed, the patient usually has systemic metastasis, presumed to be cancer stage 4, and the surgical indication for SIM is usually highly limited. Goyal et al. retrospectively reviewed 70 patients with SIM treated in their institution between 1997 and 2016. 7) Only eight of these patients (11%) underwent surgery, with one patient receiving only a biopsy. The primary cancers included the lungs in five patients, prostate in one patient, kidney in one patient, and glioblastoma in one patient. Gross total resection was achieved in four of the eight patients (50%). The authors suggested that surgical management may contribute to the improvement of survival and neurological outcomes in selected patients, although the overall survival in patients with SIM remains poor. Gazzeri et al. performed a retrospective review of clinical data of 30 patients surgically treated for SIM 8) in which lung cancer constituted most of the primary malignancies. Eighteen of the 30 patients (60%) showed symptom improvement in terms of pain relief and partial recovery of motor and/or sensory deficits after surgery. The authors proposed that gross total resection with low morbidity must be the surgical target, and subtotal resection with adjuvant therapy was proposed as a valid therapeutic option in cases wherein gross total resection is not possible. The PLS approach is equivalent to DREZ myelotomy, which was originally developed for the selective destruction of the posterolateral aspect of the spinal cord, corresponding to the area where the dorsal nerve rootlets enter the spinal cord itself. 15,16) DREZ myelotomy selectively destroys Rexed laminae I through V and Lissauer's tract (dorsolateral tract) while preserving the adjacent dorsal and lateral funiculi. The use of the PLS approach for the removal of spinal intramedullary tumors may be much more destructive to the local area than DREZ myelotomy for the treatment of intractable pain. The advantages and disadvantages of the PLS approach can be compared with those of the standard midline approach via the PMS (Table 1). [9][10][11][12] A possible benefit of the PLS approach, compared with that of the PMS approach, is the reduced chance of neurologic deficits related to posterior funiculus damage and acceptable pain relief. However, the possible disadvantages of the PLS approach include the increased risk of neurologic deficits related to lateral funiculus damage, with possible damage to the corticospinal tract on the tumor side. The PLS approach may lead to partial damage of important spinal tracts, such as the corticospinal tract, the rubrospinal tract, and further laterally, the spinocerebellar tract. However, in this case, no serious neurological complications were noted after surgery. The patient demonstrated satisfactory recovery of neurological function early after surgery. Conclusion Although tumors showing an uneven location within the spinal cord may be an acceptable indication for the ipsilateral PLS approach, there is no clear agreement on the surgical indication for the bilateral PLS approach. 
In the present case, the bilateral approach was not planned before the surgery but was eventually performed after careful consideration during the intraoperative inspection. To the best of our knowledge, this is the first case in which a spinal intramedullary metastatic tumor was successfully removed using the bilateral PLS approach.
2022-02-17T16:05:58.166Z
2022-02-16T00:00:00.000
{ "year": 2022, "sha1": "467e51b52c1b4153d7ed30c8a49abe2c66851f33", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/nmc/62/4/62_2021-0321/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3d4f63b51fabb775101a79b6cbbe69e52a687fec", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233322545
pes2o/s2orc
v3-fos-license
Study the Effect of Gum Arabic and Triton X-100 on Stability and Thermal Conductivity of ZnO/ethylene glycol Nanofluids The stability of nanofluids is one of the most important factors for obtaining the most benefit from the properties of nanoparticles. Zinc oxide was used in this research at concentrations between 0.2 and 1 wt.% in an ethylene glycol base fluid. The stability of the ZnO nanofluid was enhanced by adding two types of surfactants, Tx-100 and Gum Arabic, at concentrations of 0.1-0.5 vol.% to stabilize the ZnO nanoparticles in the base fluid. The results showed that the Gum Arabic surfactant led to a more stable fluid than Tx-100, as shown by zeta potential and UV spectroscopy measurements. The thermal conductivity coefficient was also measured, and the results showed that the thermal conductivity was higher with an added surfactant than without a stabilizer. Introduction Energy can be saved in different ways in engineering operations. The need to enhance the heat transfer of fluids has made the suspension of solid particles one of the most important methods, because of their high thermal conductivity. However, these particles present a few problems, such as sedimentation, fouling and increased pressure drop. Many techniques are used to overcome these problems and to increase the stability of the particles in different base fluids. Different materials are used as solid particles suspended in the fluids, such as metals (Al, Cu, Au, Fe, etc.), oxides (Al2O3, TiO2, CuO, SiO2, Fe2O3) and various chemical compounds (AIN, CaCO3, SiC, graphene, etc.). These particles are suspended in traditional solvent fluids such as water, ethylene glycol, engine oil, propylene glycol and more. The nanoparticles need to have a large surface area, but the flow of such a suspension through a narrowed (constricted) channel may become intermittent and may even lead to clogging. Therefore, the nanofluid must avoid this during flow while increasing thermal conductivity. The two-step method involves suspending nanoparticles in the base fluids using various mechanical, physical and chemical mechanisms. Xie et al. [1] used such a two-step method to prepare Al2O3 with propylene glycol and ethylene glycol base fluids. The TiO2/water nanofluid was prepared by Murshed et al. [2]. The drawback of this preparation method is that the nanoparticles tend to agglomerate over time due to their high surface activity. Many studies have focused on different ways to increase the stability of nanofluids, including finding the optimum pH range, dispersants and surfactants, shear mixing and ultrasonic agitation. Several studies confirmed that adjusting the pH or using different concentrations of the dispersants and nanoparticles in the base fluids enhances the stability of nanofluids, which also affects their properties such as thermal conductivity and viscosity [3][4][5][6][7][8]. Hwang et al. [9] used two types of nanoparticles, carbon black and silver, suspended in water and silicone-oil base fluids, respectively. By adding SDS (sodium dodecyl sulfate) as a surfactant at 1 wt.% concentration, the prepared nanofluids remained stable for 60 days. Semiconductor materials have been employed with great advantage in many applications, including optoelectronic devices, photocatalysts, cosmetics, pigments, ceramics, solar cells and sensors [10,11].
Chaudhuri and Paria [12] used sulfur nanoparticles in a base fluid and added different surfactants to analyze the suspension stability. Other authors [13] applied UV-visible spectroscopy to study the stability of TiO2 nanoparticles in a base fluid with different surfactants. Zaid et al. [14] investigated the effect of SDS surfactant on recovered oil with two types of nanoparticles, ZnO and Al2O3. We also mention the research of Anand and Siby [15] on the effect of four different surfactants (Triton Tx-100, polyethylene glycol (PEG 6000), cetyltrimethylammonium bromide (CTAB), SDS) added to a zinc oxide/water base fluid. Irwan et al. [16] used Gum Arabic as a surfactant to study the stability of an Al2O3/water nanofluid. Materials Zinc oxide nanoparticles (Aladdin Reagents Company, Shanghai, China) are used in this paper. Two types of surfactants were added to the nanofluids, Tx-100 (Beijing GH Trading Co., ltd.) and Gum Arabic (Guangzhou Zio Chemical Co., ltd.), with ethylene glycol as the solvent. Preparation of Nanofluid Zinc oxide nanoparticles are in the form of a white powder with a particle size of 5 nm. The ZnO nanoparticles are added to the ethylene glycol base fluid at different weight fractions (0.2-1) wt. %. The surfactant is added to the ZnO/ethylene glycol at different concentrations (0.1-0.5) vol. %. The nanofluids were prepared by mixing the solid nanoparticles with ethylene glycol under 400 rpm stirring. The nanofluids were then sonicated (MTI Corporation equipment, made in USA) for 30 min to obtain good stability of the prepared nanofluid. Characterization Measurements The zeta potential of ZnO/ethylene glycol nanofluids without/with surfactant was determined by using a Zeta Plus instrument (made in USA). A UV-vis spectrometer (Shimadzu UV-160) and a thermal property analyzer KD2 Pro (Decagon Devices Corp., Pullman) were also used to measure the light absorbance and thermal conductivity, respectively. All the characterization measurements were performed at room temperature. Zeta potential Measuring the zeta potential of nanofluids with the two types of surfactants, Tx-100 and Gum Arabic (GA), is one of the most important methods to study the mechanism of stability of the ZnO/ethylene glycol nanofluid. The zeta potential was measured over 60 days for different concentrations between (0.1-0.5) wt. % of Tx-100 and Gum Arabic. Figure 1 shows the stability (expressed by values of zeta potential) of ZnO/ethylene glycol nanofluids without stabilizer. The results reveal that the nanofluid with a lower concentration of ZnO is much more stable than the samples with a high concentration of ZnO in the base fluid. This happens because, at high concentrations, the ZnO nanoparticles collide more frequently through Brownian motion, tend to agglomerate, and settle after a short time, in agreement with Irwan et al. [16]. The obtained values are in the range of -30 to -40 mV for Tx-100 and in the range of -48 to -55 mV for GA. It is known that nanofluids are considered stable if the zeta potential ranges between (40-60) mV (either negative or positive values). Our results therefore show a higher stability for the mixture with Gum Arabic (GA) than with Tx-100 at the same concentrations. We attribute the higher stability with Gum Arabic than with Tx-100 to its ability to minimize the interactions between ZnO nanoparticles, as also stated by Pauzi et al. [17].
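The stability rule quoted above (a nanofluid is commonly considered stable when the absolute zeta potential falls in roughly the 40-60 mV band) can be expressed as a small classifier. This is a minimal sketch of that rule of thumb; the graded labels and the sample values echoing the reported ranges are illustrative, not the measured dataset.

# Classify colloidal stability from zeta potential (mV), using the
# common rule of thumb cited in the text; the band labels are assumptions.
def zeta_stability(zeta_mv: float) -> str:
    z = abs(zeta_mv)
    if z >= 60:
        return "excellent stability"
    if z >= 40:
        return "good stability"
    if z >= 30:
        return "moderate stability"
    return "prone to agglomeration"

for label, zeta in [("Tx-100 (upper)", -30), ("Tx-100 (lower)", -40),
                    ("Gum Arabic (upper)", -48), ("Gum Arabic (lower)", -55)]:
    print(f"{label}: {zeta} mV -> {zeta_stability(zeta)}")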
Figure 2. Zeta potential values of ZnO-containing nanofluids with Gum Arabic and Tx-100 addition for different concentrations. UV-Vis spectrophotometry The stability of the prepared ZnO/ethylene glycol nanofluids containing surfactants can also be simply illustrated by the absorbance of light using UV-Vis spectrophotometry just after their preparation. Figure 3 shows that the wavelengths of the absorption peaks for freshly prepared ZnO/ethylene glycol with GA and Tx at a constant concentration of 0.5 vol.% are 370 nm and 335 nm, respectively, in agreement with the standard value for ZnO nanofluids as reported by Estrada-Urbina et al. [18]. Thermal conductivity coefficient The thermal conductivity represents the most important parameter for saving thermal energy in any system [19]. We measured the thermal conductivity coefficient of simple ZnO/ethylene glycol mixtures of different concentrations, and then with the addition of the surfactants Gum Arabic and Tx-100. The results showed that the thermal conductivity coefficient of the simple ZnO/ethylene glycol suspension was enhanced when the zinc oxide nanoparticle fraction in the mixture increased, as shown in Figure 5, in agreement with the results of other authors [20][21][22]. Figure 5. Thermal conductivity coefficient of simple ZnO/EG nanofluids at different ZnO concentrations. Figure 6 shows the values of the thermal conductivity coefficient with the addition of the surfactants Gum Arabic and Tx-100. The results revealed that this parameter for ZnO/EG with the Gum Arabic stabilizer has higher values compared to the thermal conductivity of ZnO/EG containing the Tx-100 surfactant. This behavior suggests a higher stability of ZnO/EG nanofluids with Gum Arabic. The results also agree with values obtained by other authors [23][24][25]. Conclusions Taking into account that the stability of nanofluids with solid particle content is one of the most important factors to ensure the most benefit from their properties, the stability of the prepared ZnO/EG nanofluids was measured in two ways, by determining the zeta potential and the wavelength of the absorption peaks in UV-Vis spectra. The results revealed that the Gum Arabic stabilizer showed greater stability compared to Tx-100 in the ZnO/EG nanofluids. The determined values of the thermal conductivity coefficient of the prepared nanofluids with the addition of surfactants indicated that the thermal conductivity may increase by up to 35% compared to ZnO/EG without surfactants.
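The Conclusions quote a thermal conductivity enhancement of up to 35% relative to the surfactant-free ZnO/EG fluid. The enhancement percentage is a one-line calculation; the conductivity values below (in W/m.K) are assumed for illustration and are not the paper's measurements.

# Percentage enhancement of thermal conductivity relative to a reference fluid.
def enhancement_percent(k_nanofluid: float, k_reference: float) -> float:
    return 100.0 * (k_nanofluid - k_reference) / k_reference

k_base = 0.280      # assumed ZnO/EG without surfactant, W/m.K
k_with_ga = 0.378   # assumed ZnO/EG with Gum Arabic, W/m.K
print(f"enhancement: {enhancement_percent(k_with_ga, k_base):.1f}%")  # -> 35.0%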
2021-04-21T12:07:43.352Z
2021-02-03T00:00:00.000
{ "year": 2021, "sha1": "b1dfc2994bfb5b6c058e0869f80d22f8ad84962e", "oa_license": "CCBY", "oa_url": "https://revistadechimie.ro/pdf/2%20NOOR%20S%201%2021.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "b1dfc2994bfb5b6c058e0869f80d22f8ad84962e", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
25576292
pes2o/s2orc
v3-fos-license
Multicenter Prospective Trial of Stent Placement in Patients with Symptomatic High-Grade Intracranial Stenosis BACKGROUND AND PURPOSE: On the basis of the high 1-month stroke and/or death (14.7%) rates associated with stent placement in the Stenting versus Aggressive Medical Management for Preventing Recurrent Stroke in Intracranial Stenosis trial, modifications in patient selection and procedural aspects for intracranial stent placement have been recommended. We performed a multicenter prospective single-arm trial to determine whether such modifications would result in lower rates of periprocedural stroke and/or death. MATERIALS AND METHODS: The study enrolled patients with recent transient ischemic attack or ischemic stroke (excluding perforator ischemic events) related to high-grade (70%–99% in severity) stenosis of a major intracranial artery. Patients were treated by using angioplasty and self-expanding stents 3 weeks after the index ischemic event at 1 of the 10 high-volume centers in China. An independent neurologist ascertained the occurrence of any stroke and/or death within 1 month after the procedure. RESULTS: A total of 100 consecutive patients were recruited. The target lesions were located in the middle cerebral artery (M1) (n = 38, 38%), intracranial internal carotid artery (n = 17, 17%), intradural vertebral artery (n = 18, 18%), and basilar artery (n = 27, 27%). The technical success rate of stent deployment with residual stenosis of <50% was 100%. The overall 1-month stroke and/or death rate was 2% (95% confidence interval, 0.2%–7.0%). Two ischemic strokes occurred in the pontine region (perforator distribution) in patients following angioplasty and stent placement for basilar artery stenosis. CONCLUSIONS: The results of this prospective multicenter study demonstrated that modifications in patient selection and procedural aspects can substantially reduce the 1-month stroke and/or death rate following intracranial stent placement. Intracranial atherosclerosis is an important cause of cerebral ischemia with a relatively high prevalence in Chinese patients. 1 The Chinese Intracranial Atherosclerosis study reported a prevalence of intracranial stenosis of 46.6% among 2864 consecutive Chinese patients with cerebral ischemia. 2 Patients with ischemic symptoms related to high-grade intracranial stenosis (70%-99%) have an almost 20% risk of recurrent stroke within 1 year despite antithrombotic treatment. 3 Therefore, intracranial angioplasty and stent placement have been recommended to reduce the rate of recurrent ischemic events. [4][5][6][7] However, the Stenting versus Aggressive Medical Management for Preventing Recurrent Stroke in Intracranial Stenosis (SAMMPRIS) trial 8 was prematurely terminated due to excessively high 1-month stroke and/or death rates in patients randomized to intracranial stent placement. At the time of the Data Safety Monitoring review, 14.7% of patients treated with angioplasty combined with stent placement experienced a stroke or died within 1 month after enrollment compared with 5.8% of patients treated with medical therapy alone, a highly significant difference. 1,9 The 1-month stroke and/or death rate was much higher than the 6.6%, 4.5%, and 6.5% rates in the prospective Stenting of Symptomatic Atherosclerotic Lesions in the Vertebral or Intracranial Arteries (SSYLVIA) study, 10 Wingspan study, 11 and Apollo Stent for Symptomatic Atherosclerotic Intracranial Stenosis (ASSIST) study, 12 respectively.
Possible reasons for the disproportionately high rates of 1-month stroke and/or death included a very short time interval between the index ischemic event and the procedure, lack of stratification by ischemic event type, and less rigorous operator-experience requirements. 13 The Food and Drug Administration in March 2012 announced that the Wingspan stent system (Stryker Neurovascular, Kalamazoo, Michigan) continues to remain an option for patients with recurrent stroke despite medical management who have not had any new stroke symptoms within 7 days before treatment with the Wingspan. The decision was based on review of the SAMMPRIS trial and the clinical study data supporting humanitarian device exemption approval, supplemented by the opinions of an advisory panel of experts. The manufacturer, Stryker Neurovascular, was also required to enhance its physician training program for the Wingspan stent. Another expert panel concluded that the SAMMPRIS trial data support modification but not discontinuation of the use of intracranial angioplasty and/or stent placement for intracranial stenosis. 13 The panel further recommended proceeding with another clinical trial with appropriate modifications in design based on lessons learned from the SAMMPRIS trial to avoid unnecessary elimination of a potentially beneficial treatment in appropriately selected patients. On the basis of the above-mentioned considerations, a multicenter prospective single-arm trial with independent outcome ascertainment was undertaken to determine whether such modifications will result in lower rates of periprocedural 1-month stroke and/or death in patients treated with intracranial stent placement. Patient and Site Selection The study was an investigator-initiated, government-funded, prospective, multicenter registration trial that was conducted at 10 clinical sites in China. Patients who had experienced a recent TIA or nondisabling ischemic stroke (modified Rankin Scale score, ≤2) caused by high-grade stenosis (70%-99% in severity) of a major intracranial artery (middle cerebral artery [M1], intracranial internal carotid artery, intradural vertebral artery, and basilar artery) were eligible. Conventional angiography was used to quantitate the severity of stenosis by using the Warfarin-Aspirin Symptomatic Intracranial Disease Study criterion. 3 Patients who had ischemic symptoms within the most recent 3 weeks were excluded. Patients with perforator strokes only were not considered candidates for stent placement. Here, perforator strokes due to perforator occlusion are defined as basal ganglia or brain stem/thalamus infarction related to middle cerebral artery or basilar artery stenosis. The inclusion and exclusion criteria for the trial are provided in Table 1 and On-line Table 1. This study is registered at ClinicalTrials.gov with ID NCT01763320 (China Angioplasty and Stenting for Symptomatic Intracranial Severe Stenosis). The 10 participating sites were selected on the basis of the volume of procedures performed. At each site, the annual volume of intracranial angioplasty and stent placement procedures performed was >30 procedures for the past 3 years. At each site, the study team consisted of a neurologist, a neurosurgeon, a neuroradiologist, and a research coordinator. The study protocol was reviewed and approved by a central Data Safety Monitoring Board and subsequently by the local institutional review board. Each patient signed a written informed consent before the procedure.
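Because the trial quantified stenosis with the WASID angiographic criterion, a short sketch of that computation may be useful: percent stenosis is taken as (1 - D_stenosis / D_normal) x 100, where D_normal is the diameter of the normal proximal segment. The Python function and the example diameters below are illustrative assumptions, not trial measurements.

# WASID-style percent stenosis from two angiographic diameters (mm).
def wasid_percent_stenosis(d_stenosis_mm: float, d_normal_mm: float) -> float:
    return (1.0 - d_stenosis_mm / d_normal_mm) * 100.0

severity = wasid_percent_stenosis(d_stenosis_mm=0.6, d_normal_mm=3.0)
print(f"{severity:.0f}% stenosis")   # 80% -> within the 70%-99% entry band
print(70.0 <= severity <= 99.0)      # eligibility check per the protocol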
Relevant data were recorded on a standard case reporting form. Table 1. Inclusion criteria:
1) [...] neurosyphilis; any other intracranial infection; any intracranial stenosis associated with CSF pleocytosis; radiation-induced vasculopathy; fibromuscular dysplasia; sickle cell disease; neurofibromatosis; benign angiopathy of the central nervous system; postpartum angiopathy; suspected vasospastic process; and suspected recanalized embolus
2) Symptomatic intracranial stenosis: presenting with TIA or stroke within the past 12 months attributed to 70%-99% stenosis of a major intracranial artery (internal carotid artery, MCA [M1], vertebral artery, or basilar artery)
3) Degree of stenosis: 70%-99% severity confirmed by catheter angiography for enrollment in the trial
4) Remote infarctions on MRI were acceptable, which could be accounted for by the occlusion of the terminal cortical branches or hemodynamic compromise (perforator strokes excluded); perforator strokes due to perforator occlusion are defined as basal ganglia or brain stem/thalamus infarction related to middle cerebral artery or basilar artery stenosis
5) Expected ability to deliver the stent to the lesion
6) All patients should be treated beyond a duration of 3 weeks from the latest ischemic symptom onset
7) No recent infarctions identified on MRI (indicated as high signals on DWI series) at enrollment
8) No massive cerebral infarction (more than one-half of the MCA territory), intracranial hemorrhage, epidural or subdural hemorrhage, or intracranial brain tumor on CT or MRI
9) mRS score of ≤2
10) Target vessel reference diameter must be measured at 2.00-4.50 mm; target area of stenosis is ≤14 mm in length
11) No childbearing potential, or a negative pregnancy test within the week prior to the study procedure; female patients had normal menses in the past 18 months
12) Patient is willing and able to return for all follow-up visits required by the protocol
13) Patients understand the purpose and requirements of the study and have signed an informed consent form
Treatment Protocol The patients were placed on aspirin, 100 mg daily, and clopidogrel, 75 mg daily, for 3-5 consecutive days before the procedure. The procedure was performed with the patient under general anesthesia in all except 1 case. The case was typically performed via a transfemorally placed 6F long sheath or guiding catheter. The intracranial stenotic lesion was traversed by using a standard 0.014-inch microcatheter-microguidewire system with high-magnification fluoroscopic road-mapping techniques. The microcatheter was exchanged over a 300-cm-long 0.014-inch microguidewire for a Gateway angioplasty balloon (Stryker Neurovascular). After angioplasty, the Gateway angioplasty balloon catheter was exchanged over the existing 0.014-inch microguidewire for a self-expanding nitinol Wingspan stent delivery system. In general, the Wingspan stent diameter was 0.5-1.0 mm greater than the target artery and was deployed to extend at least 3 mm on either side of the lesion. 11 The Wingspan was deployed across the lesion by using the standard technique of outer containing catheter withdrawal. If the residual stenosis after Wingspan stent deployment was >50% in severity, the study protocol allowed postdilation with a new angioplasty balloon catheter. Technical success was determined by successful placement of the stent across the lesion and residual stenosis of <50% on postprocedural angiography.
Throughout the procedure, intravenous heparin boluses were given to maintain the activated clotting time between 250 and 300 seconds. The protocol required frequent measurements of blood pressure during the procedure and at least 1 measurement every half hour during the next 24 hours while the patient was monitored in an intensive care unit. The systolic blood pressure was maintained between 100 and 120 mm Hg for 24 hours after the procedure. The patient was continued on aspirin, 100 mg daily, and clopidogrel, 75 mg daily, for the next 90 days and subsequently on aspirin alone. Concurrent risk-factor modification was undertaken, consisting of normalizing low-density lipoprotein (statins, target low-density lipoprotein of <2.58 mmol/L [100 mg/dL]), hypertension (systolic pressure of <140 mm Hg and diastolic pressure of <90 mm Hg), and glycemic status (in patients with diabetes, the hemoglobin A1c level was checked with a target level of <6.5%), and lifestyle modification. 14 End Point Definition and Ascertainment Primary end points included any stroke and/or death within 1 month. A stroke was defined as a sudden-onset neurologic deficit that persisted for at least 24 hours and could be ischemic or hemorrhagic in nature. Ischemic stroke was further defined as a new focal neurologic deficit that was not associated with an intracranial hemorrhage on brain CT or MR imaging. Hemorrhagic stroke was defined as parenchymal, subarachnoid, or intraventricular hemorrhage detected by CT or MR imaging that resulted in a stroke (as defined above) or seizure. The hemorrhage was classified as asymptomatic if symptoms or signs were temporary (lasted <24 hours) without any seizures. 15 Asymptomatic strokes were considered adverse events but were not included as primary end points. At each site, the site-designated neurologist who was not part of the treating team ascertained the clinical outcomes within the 1-month follow-up. The neurologist along with the study coordinator performed each follow-up visit and collected the data regarding study end points. Statistical Analysis The statistical methods used were predominantly descriptive. Continuous data were presented as means (with SDs), and categoric data were presented as percentages. For selected percentages, 95% confidence intervals were calculated by using the binomial (Clopper-Pearson) "exact" method. 16 RESULTS From July 2013 to March 2014, 10 participating sites evaluated 235 consecutive patients with symptomatic high-grade intracranial stenosis or occlusion (70%-100% in severity by angiography). Among them, 135 patients were excluded from the study for the following reasons: 1) they did not meet the inclusion criteria; 2) they refused to accept endovascular stent placement; or 3) they had chronic occlusion of the target major intracranial artery. A total of 100 intracranial lesions were treated in 100 enrolled patients (median age, 56 years; 73% were men) (Table 2). All procedures were a combination of angioplasty followed by stent placement performed with the patient under general anesthesia (except 1 case) via the femoral approach. None of the patients required additional postdilation or >1 stent placement. Angioplasty and stent placement were performed in the following locations: 27 (27%) in the basilar artery, 17 (17%) in the intracranial internal carotid artery, 38 (38%) in the middle cerebral artery, and 18 (18%) in the intradural vertebral artery. The technical success rate was 100%.
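The primary end point interval reported in the Results (2% with a 95% CI of 0.2%-7.0%) follows from the Clopper-Pearson exact method named in the Statistical Analysis section. A minimal sketch reproducing it for 2 events in 100 patients, assuming scipy is available:

# Clopper-Pearson "exact" binomial confidence interval via beta quantiles.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

lo, hi = clopper_pearson(k=2, n=100)
print(f"1-month stroke/death: 2.0% (95% CI, {lo:.1%}-{hi:.1%})")
# prints roughly 0.2%-7.0%, matching the reported interval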
The mean severity of preprocedural stenosis was 82.7% ± 8.9% and of postprocedure stenosis was 13.5% ± 10.2% (Table 2). The overall 1-month stroke and/or death rate was 2% (95% confidence interval, 0.2%-7.0%). Ischemic stroke occurred in the distribution of the perforating arteries (pontine) in 2 patients (On-line Table 2), both of whom had undergone the procedure for high-grade basilar artery stenosis. Both of them had midpontine infarctions on the postprocedure MR imaging. One patient developed hemiparesis and ataxia within 24 hours after the procedure. The other patient developed hemiplegia and facial paralysis. DISCUSSION We observed a high technical success rate and low rate of 1-month stroke and/or death in patients with high-grade intracranial stenosis treated with intracranial stent placement within this prospective multicenter study. The study was designed after the completion of the SAMMPRIS trial and incorporated modifications in protocol from observations derived from trial results and subsequent expert recommendations. 8 Several factors may have contributed to the more favorable short-term results observed within the current study. Restriction of patient recruitment to high-volume centers and modifications in patient selection were probably important factors. In the SAMMPRIS trial, 220 procedures were performed at 50 sites in the United States during 29 months, with an average of <2 procedures at each site per year. 8 Such recruitment patterns suggest that either familiarity with the protocol or even operator experience differed from that in our study, which treated 30 patients per year at each site on average. To participate as an interventionalist in the SAMMPRIS trial, the operator was required to demonstrate previous experience with 20 intracranial angioplasty/stent procedures, of which at least 3 procedures were performed with the Wingspan or Neuroform stent system (Stryker Neurovascular). 8 In the current study, an annual volume of >30 intracranial stent procedures sustained during the past 3 years was required. Our results are comparable with the recent data (1-month stroke and/or death rate of 4.4%-6.2%) derived from some high-volume centers (>100 cases per year). [17][18][19] A retrospective analysis of 96 patients treated with intracranial angioplasty and stent placement at 3 university-affiliated institutions in the United States reported that the overall 1-month stroke and/or death rate was 7.2% in the 69-patient SAMMPRIS-eligible group and 7.4% in the 27-patient SAMMPRIS-ineligible group. 20 The 30-day stroke and/or death rate was 3.3% and 10.2% in the SAMMPRIS-eligible, angioplasty-treated subgroup and the stent-treated subgroup, respectively. Patient selection, particularly exclusion of patients with recent ischemic events and those with perforating artery ischemic stroke (in specific contrast to SAMMPRIS), may have contributed to the favorable short-term results in our trial. Our trial recruited patients who had experienced an index ischemic event at least 3 weeks before recruitment, which is longer than the recommended 7-day interval (range, 7-19 days) in the SAMMPRIS trial. The longer time interval may have allowed plaque stabilization and spontaneous lysis of overlying thrombus, and probably also reduced the risk of hemorrhagic transformation for patients with recent ischemic stroke (<3 weeks).
17,[21][22][23] An analysis of the National Institutes of Health Multicenter Wingspan Intracranial Stent Registry Study found that stent placement performed within 10 days of a qualifying ischemic event was associated with a higher rate of 30-day stroke and/or death compared with procedures performed after 10 days (8% versus 17%, P = .06). 21,24 In the SAMMPRIS trial, the rates of ischemic stroke, symptomatic hemorrhagic stroke, or any death within 1 month were 15.7% and 13.8% in the patients enrolled within 7 days or after 7 days of their qualifying event, respectively. 15 Exclusion of patients with recent ischemic stroke may also exclude those with the highest risk of an ischemic event recurrence; therefore, the benefit of stent placement in the reduction of stroke recurrence may also be diminished. We included patients with distal hypoperfusion and/or cortical involvement. The exclusion of patients with perforating artery ischemic stroke may have reduced the occurrence of this type of stroke postprocedurally. 25 However, in a post hoc analysis of the SAMMPRIS trial, the rate of postprocedural ischemic events was not higher among those recruited due to perforating artery ischemic stroke (0%) compared with those with TIAs (8.9%) or nonperforating artery ischemic strokes (14.3%). 15 Certain procedure-related factors such as clopidogrel load (≈10%) and poststent angioplasty (≈10%) performed in the SAMMPRIS trial were avoided in the current study and may have made some contribution to the differences in adverse event rates. Two additional aspects that could improve the results of intracranial angioplasty and stent placement are improvement in device design and point-of-care testing for assessing the magnitude of platelet inhibition with antiplatelet medication. Although the self-expanding Wingspan stent with the over-the-wire technique was widely adopted because of its relative ease of delivery compared with balloon-expandable stents using the rapid-exchange technique, the effectiveness of the self-expanding stent in restoring lumen diameter and preventing restenosis has been questioned. Although the radial force exerted by the Wingspan is superior to that of other self-expandable stents such as the Enterprise (Codman & Shurtleff, Raynham, Massachusetts) and Neuroform stents, it is not comparable with that of balloon-expandable stents. A new generation of balloon-expandable stents with a rapid-exchange platform may result in superior technical results. We did not perform point-of-care testing to guide antiplatelet treatment in our cohort of patients. Point-of-care testing was introduced because considerable differences can be observed among individuals in regard to platelet inhibition with the same doses of aspirin and clopidogrel. 26 Such assessment may allow the use of higher doses of clopidogrel and intravenous glycoprotein IIb/IIIa inhibitors among those with an inadequate response (resistance) to standard doses of antiplatelet medication. The low rate of adverse events following intracranial angioplasty and stent placement in our trial raises the question of the superiority of such a procedure over intensive medical treatment for high-grade symptomatic intracranial stenosis.
Intensive medical therapy in the SAMMPRIS trial, consisting of aspirin, 325 mg/day, for the entire follow-up; clopidogrel, 75 mg per day for 90 days after enrollment; and aggressive risk-factor management (targeting blood pressure <130/80 mm Hg and a low-density lipoprotein concentration of <70 mg/dL), reduced the 30-day stroke and/or death rate to 5.8%, which was substantially lower than the estimated rate of 10.7% based on historical controls. 8 Chaudhry et al 27 reported that a ≤3.8% 1-month rate of stroke and/or death was required to achieve a 35% relative risk reduction of the primary end point (composite of 1-month stroke and/or death and ipsilateral stroke beyond 1 month) among the intracranial stent-treated group compared with the medically treated group at 1-year follow-up, as specified by the superiority threshold within the SAMMPRIS hypothesis. One of the limitations of our study was the restriction posed by the sample size. We provided the 95% confidence interval values to give a quantitative assessment of the precision of the estimate. Although the current 1-month stroke and/or death rates seen following intracranial stent placement are encouraging, our study does not provide any information regarding the long-term results in regard to both clinical events and restenosis. Based on the results of the current study, the China Angioplasty and Stenting for Symptomatic Intracranial Severe Stenosis trial has been initiated and is an ongoing, prospective, multicenter randomized trial, which is being conducted at 8 sites intending to recruit 380 subjects (stent placement, 190; medical treatment alone, 190). 28 The study aims to demonstrate a 10.7% absolute reduction in ipsilateral stroke and/or death during 12 months (assuming an event rate of 18% for medically treated patients 3 and 7.3% for stent-treated patients 19 ). The sample size provides 80% power with a 2-sided test at the 5% level of significance and allows for a 20% rate of loss to follow-up.
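As a rough check on the planned enrollment of the follow-on randomized trial, the standard normal-approximation sample size formula for comparing two proportions, inflated for the stated 20% loss to follow-up, lands close to the 190 patients per arm quoted above. This is an illustrative sketch, not the trial's actual power calculation; scipy is assumed to be available.

# Per-arm sample size for a two-sided two-proportion comparison.
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80, dropout=0.20):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    n = num / (p1 - p2) ** 2
    return ceil(n / (1 - dropout))  # inflate for attrition

print(n_per_arm(0.18, 0.073))  # ~188 per arm, consistent with 190 planned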
2017-07-31T01:51:17.499Z
2016-07-01T00:00:00.000
{ "year": 2016, "sha1": "ec27262b5e9301c8feaa2c09e894067049c6f329", "oa_license": "CCBY", "oa_url": "http://www.ajnr.org/content/ajnr/37/7/1275.full.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ec27262b5e9301c8feaa2c09e894067049c6f329", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17914211
pes2o/s2orc
v3-fos-license
Evaluation of Plasma miR-21 and miR-152 as Diagnostic Biomarkers for Common Types of Human Cancers Stable blood-based miRNA species have allowed for the differentiation of patients with various types of cancer. Therefore, specific blood-based miRNAs might be considered as a methodology that could be informative of the presence of cancer, potentially from multiple distinct organ sites. Recently, miR-21 has been identified as an "oncomir" in various tumors, while miR-152 has been identified as a tumor suppressor. In this study, we investigated whether circulating miR-21 and miR-152 can be used for early detection of lung cancer (LuCa), colorectal carcinoma (CRC), breast cancer (BrCa) and prostate cancer (PCa), and whether they can distinguish cancer from various benign lesions at these organ sites. We measured the two miRNA levels by using real-time RT-PCR in plasma samples from a total of 204 cancer patients, 159 patients with various benign lesions, and 228 normal subjects. We observed significantly elevated expression of miR-21 and miR-152 in LuCa, CRC, and BrCa when compared with normal controls. We also found upregulation of plasma miR-21 and miR-152 levels in patients with benign lesions of the lung and breast, as compared to normal controls, respectively. No significant expression variation of the two miRNAs was observed in PCa or prostatic benign lesions as compared to healthy controls. Receiver operating characteristic (ROC) analyses revealed that miR-21 and/or miR-152 can discriminate LuCa, CRC and BrCa from normal controls. Our results suggest that plasma miR-21 and miR-152 may serve as non-specific noninvasive biomarkers for early screening of LuCa, CRC, and BrCa, but not PCa. Introduction Cancer is a severe health problem threatening human life in the United States and many other parts of the world. Among men in the United States, the three most common cancers are prostate, lung and bronchus, and colorectal cancers, which account for about 50% of all newly diagnosed cancers, with prostate cancer (PCa) alone accounting for 28% of incident cases 1 . The three most commonly diagnosed types of cancer among women are breast, lung and bronchus, and colorectal, accounting for 51% of estimated cancer cases, with breast cancer (BrCa) alone accounting for 29% of all new cancer cases 1 . Early cancer diagnosis is one method of disease management, which could also improve patient stratification and therapy response prediction, and thus reduce mortality. Blood-based protein biomarkers, such as carcinoembryonic antigen (CEA) for colorectal carcinoma (CRC) 2 and prostate specific antigen (PSA) for PCa 3 , have gained considerable recognition for improving disease outcomes through the early diagnosis of these tumors. However, the use of these biomarkers in screening for early stages of PCa or CRC is still limited due to their low sensitivity and specificity as well as their inability to distinguish aggressive from indolent tumors, or to distinguish benign or precancerous lesions from early-staged cancer 4,5 . Currently, no protein or other blood-based biomarkers are clinically available for early screening of BrCa or lung cancer (LuCa), and mammography and computed tomography (CT) are currently applied as the standard (or emerging) approaches for BrCa and LuCa screening, respectively. Therefore, the development of novel and more sensitive biomarkers for early cancer diagnosis is needed to supplement or complement the existing detection methods.
MicroRNAs (miRNAs) are small (~22 nucleotide), non-coding RNA molecules that have proven to be post-transcriptional regulators of gene expression by translational repression or degradation of targeted transcripts in a wide variety of biological processes including cell fate specification, proliferation, cell death, energy metabolism, and tumorigenesis 6 . The mature miRNA molecules circulating in plasma/serum are very stable due to the miRNA-Argonaute protein complex 7,8 , which provides the most important advantage of this non-invasive approach, with remarkable stability and repeatability of measurement 9 . Since circulating miRNAs were detected in blood, they have been reported as an emerging class of blood-based biomarkers with the potential to provide information about distinct tumor biology in individual patients 9 , and aberrant levels of different circulating miRNA species have been demonstrated as potential diagnostic or prognostic markers in various types of cancer, including lung 10,11 , colorectal 12,13 , prostate 14,15 , and breast cancer 16,17 . MicroRNA-21 (miR-21) is one of the first microRNAs to be described as an oncomir; it targets multiple tumor suppressors such as PTEN 18 , PDCD4 19 , and the p53 and TAp63 pathways 20 . Expression of miR-21 in tumor cells is up-regulated in a wide variety of solid tumors including prostate, colorectal, lung and breast cancer 21 , and elevated levels of circulating miR-21 have been persistently observed in cancer patients from different populations by different researchers [21][22][23][24][25] . Furthermore, overexpression of serum or plasma miR-21 has also been found in other tumors including gastric 26 , biliary 27 , ovarian 28 , cervical 29 , esophageal 30 , brain 31 , and liver 32 cancers. These findings suggest that circulating miR-21 could be used as a useful blood-based diagnostic biomarker for various types of human cancers. However, the altered circulating miR-21 levels reported in these previous studies have mostly been observed by comparing miR-21 levels between cancer patients and healthy controls, with few studies comparing cancer patients with subjects with benign lesions. Thus, it still remains unclear whether circulating miR-21 can distinguish early-staged cancer patients from those subjects with benign lesions. Recently, downregulated expression of microRNA-152 (miR-152) has been observed in various types of human cancer cell lines and tumor tissues including gastrointestinal cancers 33 , endometrial cancers 34 , ovarian cancer 35 , and hepatocellular carcinoma 36 , indicating that miR-152 might have the potential to act as a tumor suppressor in these tumors. Further studies have verified DNA methyltransferase 1 (DNMT1) as a target of miR-152, with downregulation via a feedback mechanism of CpG hypermethylation in the gene promoter in cancer cells [37][38][39] . In prostate cancer, miR-152 can target TGFα to inhibit PCa cell migration and invasion and also interact with ERBB3 to contribute to PCa progression 40,41 . A recent study has shown significantly differentiated levels of circulating miR-152 in patients with non-small cell lung cancer (NSCLC), indicating it as a non-invasive biomarker for prediction of recurrence in resectable NSCLC and the survival of squamous cell carcinoma patients 42 . An earlier study in urinary bladder cancer demonstrated that the RNA expression ratio of miRNA-126/miRNA-152 in urine samples enabled the detection of bladder cancer as distinct from urinary tract infections and healthy individuals 43 .
However, no such study on the diagnostic value of plasma miRNA-152 has been reported in prostate, breast, colon or other human cancers so far. In this study, to assess the diagnostic value of circulating miR-21 and miR-152 in cancer, we measured the levels of plasma miR-21 and miR-152 in several cohorts of subjects including early-staged (stage I or II) cancer patients for the four common types of cancer (breast, prostate, lung, and colon cancer), benign patients at the four organ sites, and healthy controls. Patient Cohorts Lung cancer: We enrolled a total of 143 plasma samples from 55 patients with non-small cell lung carcinoma (NSCLC), 35 patients with benign lesions and 53 random high-risk controls at Rush University Medical Center (Chicago, IL) between 2004 and 2010. The early-stage NSCLC patient inclusion criteria included disease confined to the chest without evidence of distant metastases; no preoperative chemo- or radiotherapy within 1 year of our initial blood sampling; and a minimum of 2 years of clinical follow-up data. Patients with benign lesions included participants with a range of non-neoplastic pulmonary disorders (e.g. granulomas, hamartomas, and inflammatory lesions) as indicated in low-dose computed tomography (LDCT) screening. High-risk subjects serving as normal controls in this study were defined as individuals aged 55 to 75 years, having a smoking history of more than 30 pack-years, and having quit less than 15 years before randomization. All benign participants and high-risk individuals were followed with annual LDCT and remained cancer-free for a minimum 2-year follow-up. Cancer, benign and control samples were approximately age-, race-, gender-, and smoking status-matched as much as possible. Demographic information for these patients and controls is listed in Table 1. All patient data were acquired with written formal consent and in absolute compliance with the institutional review board (IRB) at Rush University Medical Center. Breast cancer: Blood samples were obtained from The Cooperative Human Tissue Network (CHTN) Western Division and Southern Division, including 53 female sporadic breast cancer patients, 40 female patients with benign lesions, and 49 healthy females who served as normal controls. Cancer patient histopathology results were confirmed by surgical resection of the tumors, and clinicohistopathological characteristics and tumor stage were assessed based on histobiopsy results. No preoperative chemotherapy or radiotherapy was applied to cancer patients included in this study. Breast benign lesions are defined as hyperplasia, fibroadenomas, cysts and some unspecified findings diagnosed at this organ. Control blood samples were collected from healthy women with no history of malignant diseases and no inflammatory conditions. All these cancer, benign and control samples were approximately age- and race-matched, as shown in Table 1. The Rush University Medical Center IRB approved the study, with written consent obtained for the use of all subject information and biospecimens. Prostate cancer: Unrelated men were recruited between the years 2001 and 2006 from the Division of Urology at Howard University Hospital in Washington, DC. Incident sporadic prostate cancer cases (n=65), with no family history, were identified by a urologist within the division or by the study coordinator and confirmed by a review of medical records.
Clinical characteristics including Gleason grade, PSA, tumor-node metastasis stage, age at diagnosis, and family history were obtained for all cases from medical records. None of these PCa patients received preoperative chemo- or radiotherapy. Disease aggressiveness was defined as "low" (T category <T2c and/or Gleason grade <8) or "high" (T category ≥T2c and/or Gleason grade ≥8). Benign subjects (n=51) were recruited from men diagnosed with benign prostatic hyperplasia (BPH) or high-grade prostatic intraepithelial neoplasm (HGPIN) lesions but without prostatic cancer. Healthy control subjects (n=74) were unrelated to the cases and matched for age. Individuals who were ever diagnosed with benign prostate hyperplasia and/or had a prostate-specific antigen (PSA) value >2.5 ng/ml were not included as controls. All participants were between 40 and 85 years of age. The Howard University IRB approved the study and written consent was obtained from all participants. Colorectal cancer: Patients with CRC had a diagnosis determined at colonoscopy and confirmed by final surgical pathology at Rush University Medical Center. Early-staged (stage I or II) CRC patients without evidence of nodal disease were included (n=31). No preoperative chemotherapeutic treatment was given to any of the CRC patient participants. Individuals (n=33) with adenomatous polyps were defined as benign patients according to CRC screening. Healthy volunteers (n=52), matched for age and race, were individuals who underwent CRC screening by colonoscopy that was negative for either adenomatous polyps or CRC. The study was approved by the IRB of the Rush University Medical Center. Plasma RNA extraction The peripheral blood samples were collected using EDTA anticoagulant tubes and centrifuged at 4000 RPM for 10 min, followed by a 15 min high-speed centrifugation at 12,000 RPM to completely remove cell debris. The supernatant plasma was stored at −80°C until analysis. Total RNA was extracted from 200 μl of plasma using the Qiagen miRNeasy Mini kit (Qiagen, Valencia, CA) according to the manufacturer's protocol. In brief, the plasma was mixed with QIAzol Lysis Reagent and chloroform. After centrifugation at 12,000g at 4°C for 15 min, the aqueous phase was transferred into another tube, and 1.5 volumes of absolute ethanol were added. The mixture was then applied to miRNeasy Mini kit columns, followed by washing with RWT and RPE buffers. The RNAs were finally eluted in 40 μl of RNase-free water. For normalization of sample-to-sample variation during RNA extraction and as an internal control, the same amount (25 fmol) of a synthetic C. elegans miRNA-39 (Cel-miR-39) was added to each plasma mixture. Quantitative PCR MiRNAs were measured using TaqMan miRNA assay kits (Applied Biosystems, USA) according to the manufacturer's protocol. Briefly, about 30 ng of enriched RNA was reverse transcribed with a TaqMan microRNA Reverse Transcription Kit (Applied Biosystems, USA) in a 15 µL reaction volume. Expression levels of miR-21 and miR-152 were quantified in triplicate by qRT-PCR using human TaqMan MicroRNA Assay Kits (Applied Biosystems, USA) on an Eppendorf realplex 4 system (Eppendorf North America, Hauppauge, NY). Spiked-in Cel-miR-39 was used as a normalizer for plasma miRNA quantification. Statistical analysis The relative expression of miR-21 and miR-152 was analyzed by using the 2^(−ΔΔCt) method, as previously described 9 . The Mann-Whitney U test was used to compare the expression of plasma miRNAs between the different groups.
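The 2^(−ΔΔCt) step described above can be written out explicitly: each target Ct is first normalized to the spiked-in Cel-miR-39 Ct, and the patient ΔCt is then referenced against the control ΔCt. The sketch below uses invented Ct values purely for illustration; they are not study data.

# Relative quantification by the 2^(-ddCt) method with a spike-in normalizer.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target_sample - ct_ref_sample  # normalize to Cel-miR-39
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_ctrl
    return 2 ** (-dd_ct)

# patient plasma vs. pooled healthy-control plasma (assumed Ct values)
fc = fold_change(ct_target_sample=28.1, ct_ref_sample=22.0,
                 ct_target_ctrl=29.0, ct_ref_ctrl=22.0)
print(f"relative miR-21 expression: {fc:.2f}-fold")  # ~1.87-fold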
The Youden index determined the threshold for the plasma miRNA concentrations. The correlation between clinicopathologic features and plasma miRNA levels was determined by Student's t-test or χ² test. All tests were 2-sided and a P value less than 0.05 (95% CI) was considered statistically significant. Statistical analysis was performed using SPSS 16.0 software (SPSS Ltd., UK). Patient Population In this study, a total of 591 participants were recruited, including 204 cancer patients (55 LuCa, 31 CRC, 53 BrCa and 65 PCa), 159 patients with benign lesions (35 lung benign nodules, 33 advanced colon adenomas, 40 breast benign lesions and 51 BPHs/HGPINs), and 228 normal subjects (Table 1). For each type of cancer, only early-staged patients (stage I or II) were selected. The independent normal subjects were recruited separately and matched in race, sex and age for each type of cancer and benign lesion. The cohort of normal subjects for lung cancer was also described as a "high-risk" population, in which all the healthy subjects had a smoking history of more than 30 pack-years and had quit less than 15 years earlier. Colorectal advanced adenomas, which are considered precancerous lesions, were also included in the group of "benign lesions" in this study. Only matched healthy females and males were included in the cohorts of normal subjects for BrCa and PCa, without significant differences in age or race when compared to the patient groups, respectively. Evaluation of plasma miR-21 and miR-152 expression levels in cancer patients To evaluate the diagnostic value of the oncomir miR-21 for the four common types of cancer, the levels of plasma miR-21 were measured in all 204 recruited cancer patients with LuCa, CRC, BrCa and PCa, and in the four independent normal control cohorts, as shown in Table 2 and Fig. 1. We found the mean relative level of plasma miR-21 was increased in LuCa patients with a 2.39-fold change when compared to high-risk controls (p=1.07×10 -4 ) (Fig. 1a), and a 1.92-fold change in BrCa patients in comparison to the matched healthy females (p=0.03) (Fig. 1c). We also observed a slight up-regulation of miR-21 in CRC (1.23-fold, Fig. 1b) and PCa patients (1.21-fold, Fig. 1d), with no significant differences when compared to healthy controls (Table 2). It has been previously shown that miR-152 functions as a tumor suppressor, with evidence of decreased expression in various types of tumors [37][38][39] . To investigate whether plasma miR-152 is differentiated in the selected four common types of cancers, we measured the expression level of circulating miR-152 among cancer patients, benign individuals and healthy controls. Unexpectedly, we observed increased levels of miR-152 in cancer patients with LuCa, CRC and BrCa when compared to normal controls (Figs. 1e-h). As shown in Table 2 and Fig. 1, significantly up-regulated miR-152 expression was observed with a 2.68-fold change in LuCa patients (p=1.52×10 -4 ) (Fig. 1e), 2.03-fold in CRC (p=0.02) (Fig. 1f), and 2.91-fold in BrCa (p=2.75×10 -3 ) (Fig. 1g), when compared to normal controls, respectively. Up-regulation of plasma miR-21 and miR-152 expression in patients with benign lesions We also examined the plasma miR-21 and miR-152 expression levels in the selected four groups of benign patients. We did not observe a significant difference in plasma miR-21 expression in benign patients as compared to cancer patients in all four selected types of diseases (Table 2, Figs. 1a-d).
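A minimal sketch of the Mann-Whitney U comparison behind the p-values reported above; the lognormal samples stand in for relative plasma miRNA levels and are random placeholders rather than the study data (scipy assumed).

# Nonparametric two-group comparison of plasma miRNA levels.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
normal = rng.lognormal(mean=0.0, sigma=0.6, size=53)   # matched controls
cancer = rng.lognormal(mean=0.9, sigma=0.6, size=55)   # upward-shifted cases

u_stat, p_value = mannwhitneyu(cancer, normal, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.2e}")  # significant at p < 0.05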
As compared with normal controls, a significantly higher level of miR-21 was observed in lung benign patients (p=4.42×10 -3 ), while no difference was observed between benign patients and normal individuals in the other three types of disease (Table 2). Compared to normal controls, elevated levels of miR-152 were also observed in lung benign patients with a 1.98-fold change (p=0.02) (Fig. 1e), and in breast benign patients with a 3.41-fold change (p=2.3×10 -4 ) (Fig. 1g), respectively. No significant difference in miR-152 expression was seen between prostate and colorectal benign patients and their matched normal subjects (Figs. 1f and 1h). Interestingly, we noticed a significant difference in miR-152 between CRC patients and patients with adenomatous polyps (Table 2, Fig. 1f). The diagnostic value of miR-21 and miR-152 for cancer early detection To further evaluate the diagnostic efficiency of plasma miR-21 and miR-152 in discriminating patients with early-staged cancer from healthy individuals and benign patients, receiver-operator characteristic (ROC) analyses were applied to those groups between which statistically significant differences in the expression level of plasma miR-21 or miR-152 were observed. As shown in Fig. 2, the differentiated miR-21 was found to be able to distinguish cancer patients with LuCa or BrCa from matched normal controls, with area under the ROC curve (AUC) values of 0.701 for lung cancer (Fig. 2a) and 0.613 for breast cancer (Fig. 2c), respectively. Comparatively, miR-152 exhibited AUC values of 0.701 for lung cancer (Fig. 2b) and 0.687 for breast cancer (Fig. 2d) in the discrimination of cancer from normal individuals. Furthermore, miR-152 was also able to distinguish CRC patients from the matched healthy individuals with an AUC of 0.539 (Fig. 2e). Neither miR-21 nor miR-152 was observed to be able to discriminate prostate cancer patients from the matched healthy controls in the present study. With its significantly different level in CRC patients as compared to patients with adenomatous polyps, plasma miR-152 was considered able to distinguish CRC from benign colorectal polyp lesions, with a modest AUC of 0.537 (Fig. 2f) exhibited in the cohort of 31 cancer patients. No such observation was obtained for miR-152 in the other three types of cancers when compared to benign patients. The expression level of plasma miR-21 was not significantly changed when compared between cancer vs. benign groups in all four types of cancers. Discriminating efficiency of plasma miR-21 and miR-152 between cancer and non-cancer To further evaluate the diagnostic value of plasma miR-21 and miR-152 in distinguishing cancer from benign and normal individuals, we analyzed the expression level of these two miRNAs by re-grouping these subjects. We first grouped benign patients together with matched normal individuals as a "non-cancer" group, and then compared the expression levels of plasma miR-21 and miR-152 between the cancer and non-cancer groups for all four types of cancers (Table 3). A significantly higher level of plasma miR-21, with a 1.85-fold change, was observed in lung cancer patients as compared to the non-cancer group (p<0.001). The corresponding AUC for miR-21 in the discrimination of lung cancer from non-cancer was 0.645 (Fig. 3a). However, no such significant alteration was found for miR-21 between cancer and non-cancer groups in the other three types of cancers (Table 3).
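The ROC analysis behind these AUC figures can be sketched with scikit-learn as follows; the simulated expression values are placeholders (not the study data), and the Youden-index cutoff mirrors the thresholding mentioned in the Statistical analysis section.

# ROC curve, AUC, and Youden-index threshold for a plasma miRNA classifier.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
controls = rng.lognormal(mean=0.0, sigma=0.6, size=53)  # simulated controls
cases = rng.lognormal(mean=0.6, sigma=0.6, size=55)     # simulated cancer cases

y_true = np.concatenate([np.zeros_like(controls), np.ones_like(cases)])
scores = np.concatenate([controls, cases])

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
youden_cut = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {auc:.3f}, Youden cutoff = {youden_cut:.2f}")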
On the other hand, the expression level of miR-152 was significantly elevated in the cancer group when compared to non-cancer individuals, with about a 2-fold change (p<0.01) for both LuCa and CRC (Table 3), and the corresponding AUC values were 0.652 for LuCa and 0.537 for CRC, respectively (Figs. 3b and 3c). Since there was no significant difference in plasma miR-21 and miR-152 between the cancer groups and benign patient groups in almost all four types of cancers (except miR-152 in cancer vs benign in CRC) (Table 2), we combined cancer patients and benign patients into a "disease" group, and compared the expression levels of these two miRNAs between "disease" and normal controls. Significant differences were observed for both miR-21 and miR-152 in the lung and breast "disease" groups (all p<0.05), but not in colorectal or prostate disease (Table 3). The AUCs of miR-21 and miR-152 in distinguishing disease from normal were 0.682 and 0.674 for lung (Figs. 3d and 3e), and 0.603 and 0.704 for breast (Figs. 3f and 3g), respectively. Discussion In the present study, we evaluated plasma miR-21 and miR-152 as potential diagnostic biomarkers for early detection of multiple types of cancer. Our results confirmed significantly elevated miR-21 levels in lung cancer and breast cancer patients, but not in CRC or PCa patients. Unexpectedly, we observed up-regulated expression of plasma miR-152 in lung, breast, and colorectal cancer. To our knowledge, this is the first report to reveal the association of plasma miR-152 level with multiple types of cancer. We also investigated the expression levels of miR-21 and miR-152 in the four groups of patients with benign or precancerous lesions at one of the lung, colorectum, breast or prostate sites. Our results indicated a significantly lower level of miR-152 in patients with adenomatous polyps as compared to CRC patients, but not in the other three benign patient groups. In addition, no significant difference in miR-21 expression was observed between benign patients and cancer patients in all four selected types of cancer. Currently, published studies on circulating microRNAs as cancer detection biomarkers, mostly focusing on a specific type of cancer, have identified a wide variety of microRNAs for multiple types of cancers. However, results from these studies may lead to misunderstanding of the identified miRNAs as potential biomarkers specific for a certain type of cancer. For example, miR-21 was such a miRNA candidate identified in many independent studies, in each of which it was suggested as a biomarker for a certain disease. Therefore, simultaneous assessment of a cancer-related miRNA marker in various types of cancer and disease statuses could improve the sensitivity and specificity of this marker for cancer prediction. In this study, we evaluated miR-21 in the four most common types of cancer and the corresponding benign lesions, and our results suggest that miR-21 is a non-specific biomarker for early cancer screening, unable to distinguish malignancy from non-cancerous benign lesions. The oncomir miR-21 is believed to play a role in a wide variety of cancers, and high levels of plasma or serum miR-21 have been reported in various cancers [21][22][23][24][25] . It is well known that miR-21 is a secreted miRNA which is derived from exosomes 44,45 . When tumors actively release exosomes into the peripheral circulation, the miRNA molecules contained in exosomes can act as diagnostic biomarkers for cancer 44,45 .
Interestingly, it has been shown that postoperative reductions in circulating miR-21 levels occurred exclusively among cancer patients with potentially curative surgeries or chemotherapy treatment, indicating that circulating miR-21 could also serve as a long-term follow-up biomarker in cancer prognosis [46][47][48] . Moreover, miR-21 has also been shown to be involved in the development of other human diseases, including heart disease, with observation of significantly down-regulated miR-21 expression in acute myocardial infarcted areas 49,50 . We also observed higher expression levels of plasma miR-21 in benign diseases of the lung and breast in the cohorts of samples we used in this study. Altogether, these observations indicate that miR-21 may be involved in multiple types of diseases, inflammatory processes, and responses to chemotherapeutic drugs, and therefore circulating miR-21 may not be an ideal specific biomarker for the detection of cancer. MiR-152 is a member of the miR-148/152 family 33 . Like the other two members, miR-148a and miR-148b, miR-152 is involved in the growth and development of normal tissues, as well as in the genesis and development of disease 51 . It has been shown that miR-152 functions as a tumor suppressor, and its expression is decreased in various tumor tissues [33][34][35][36][37][38][39] . Surprisingly, increased levels of plasma miR-152 were observed in lung, breast and colon cancer patients and in patients with lung or breast benign lesions, as compared to matched normal controls in this study (Table 2 and Fig. 1). Unlike the oncomir miR-21, which is highly expressed in both tumor tissues and the blood of cancer patients, this observation of an increased miR-152 level in plasma samples from various cancer patients was unexpected because miR-152 is believed to be down-regulated in tumor tissues. Actually, recent studies have supported this "inconsistency" of a high circulating miR-152 level in cancer patients with several pieces of evidence in bladder cancer and lung cancer 42,52 . In lung cancer, the plasma miR-152 expression level was shown to significantly predict the survival of squamous cell carcinoma, with a low plasma miR-152 level associated with poor disease-free survival of NSCLC patients 42 . However, a very recent study in bladder cancer revealed that a high circulating miR-152 level was present in bladder cancer patients and was independently associated with tumor recurrence of nonmuscle-invasive bladder cancer and worse recurrence-free survival 52 . As this is the first report of a high circulating miR-152 level in early-staged cancer of the lung, breast, colorectum, and prostate, further studies are needed to confirm our findings in larger sample sets. Moreover, it will be worthwhile to further investigate the underlying mechanism of this discrepancy between the high miR-152 level in plasma and its low expression level in tumor tissues. In the present study, we observed slight to medium predictive efficiency of miR-21 and miR-152 in the discrimination of LuCa, CRC, and BrCa from normal controls, with little power in distinguishing cancer from benign lesions. We realize that the numbers of subjects, including cancer patients, benign patients and normal controls, were still small for each type of cancer, limiting the evaluation of miR-21 and miR-152 as predictive biomarkers in the early detection of cancer. However, we believe that the expression patterns of miR-21 and miR-152 in plasma revealed across these multiple sample cohorts are the key reason for the limited discriminating power.
Therefore, searching for more specific and powerful miRNA biomarkers to discriminate early cancer from precancerous lesions in these common types of cancer, using novel strategies, is still warranted. Conclusions In this study, we measured plasma miR-21 and miR-152 expression levels in patients with lung, breast, colorectal and prostate cancer, in comparison with matched normal controls and patients with benign lesions at the respective organ sites. Our results showed increased levels of plasma miR-152 and miR-21 in patients with some types of cancer and in some patients with benign lesions, with limited predictive value in discriminating cancer from healthy controls and benign lesions. Further studies aimed at identifying more specific and sensitive miRNA biomarkers for the early detection of each type of cancer, especially for distinguishing cancer from benign lesions, are still needed.
2016-05-07T08:25:50.666Z
2016-02-05T00:00:00.000
{ "year": 2016, "sha1": "9e47f8f68dbdf2bf28c7cce8d5246ff1a3fbea4f", "oa_license": "CCBYNC", "oa_url": "https://www.jcancer.org/v07p0490.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9e47f8f68dbdf2bf28c7cce8d5246ff1a3fbea4f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
249487685
pes2o/s2orc
v3-fos-license
Smart Product Marketing Strategy in a Cloud Service Wireless Network Based on SWOT Analysis Introduction Cloud services provide virtualized computer resources through the cloud. As a type of internet-based computing, they deliver various types of data on demand to internet-connected virtualized computers in response to user requests [1]. Physical computers are not employed. A "cloud" service not only provides user information (data) globally via network connection but also provides data in various service types [2]. Cloud service providers (CSPs) try to build a marketplace where software can be easily used by their customers and try to attract a wide variety of software developers. Software developers can achieve global sales at lower costs than conventional software distribution methods by registering their own software in these marketplaces, without the need to build separate applications and distribution services [3]. In addition, users can register the required cloud-based software without time and space constraints. Distribution and technology support do not require separate written contracts or requests and are provided efficiently through the console screens of existing CSPs [4]. In this study, we examine the status of marketplaces built by many CSPs and help software developers select a marketplace suitable for both their purpose of use and their business model, based on a strengths, weaknesses, opportunities, and threats (SWOT) analysis. Earlier studies of software distribution channels and cloud-based marketplaces concentrated on market research and the distribution status of existing packaged software [5]. Recently, artificial intelligence (AI) and machine learning technologies have increasingly been operated in cloud environments, and AI is employed in various non-information-technology industries for practical research analysis. One author researched distribution channels and the status of existing packaged software, and another conducted empirical studies on factors affecting the behavioral intentions of individual users wanting to use SaaS [6]. That study confirmed that SaaS provided by recent CSPs has drawn keen attention as a means to enable innovation in the software distribution paradigm, but not many users have actively adopted it [7]. Regarding the value of the marketplace, researchers showed that the latest applications-from enterprise software to mobile and social networking-are adopted and made available through the cloud, enabling broader adoption and advanced functions [8]. The study also found that the cloud creates a marketplace where more products participate in the service provision process and applications are easily developed through the reuse and aggregation of services and resources. User-specific items were classified using analytic hierarchy process (AHP) analysis to study the requirements of specific user groups [9]. The level of security required by cloud users and the need for management policies were emphasized and analyzed; that study enables users to standardize the criteria and requirements for choosing security services in a cloud marketplace. We conducted an analysis and case study of independent software vendors' specific software on cloud platforms and confirmed the need for policies and software that meet the requirements of specific industrial environments [10]. While marketplace definitions and analysis data have been reported in previous studies, this study directly analyses the marketplaces of major local and global CSPs. The discussion of practical registration methods and the proposal of major
marketing strategies via SWOT analysis differentiate this study from previous ones [11]. The objective of the study is to help customers understand product marketing strategy in the market. This classification can be done by using artificial intelligence with WSN techniques. Literature Review A SWOT analysis is a method of assessing a system's strengths and weaknesses in order to submit them for review. SWOT analysis is most commonly used to evaluate the current situation of a market before entering it [12]. A basic understanding of the cloud and its services is required before entering the cloud ecosystem; because of this, we attempt to evaluate cloud computing using the SWOT framework [13]. During our research, we found a variety of viewpoints on cloud computing; some viewed it favorably, while others viewed it as a negative, or at the very least unfavorable, option. As a result, because of this wide range of perspectives, we decided to conduct a SWOT analysis of cloud computing. SWOT analysis is a well-known strategy formulation tool. The goal of a SWOT analysis is to determine a company's assets and liabilities as well as potential opportunities and dangers in the external environment. Following the identification of these elements, strategies are established that may build on the strengths, eradicate the flaws, take advantage of the opportunities, or counter the dangers. Internal and external assessments are used to identify the organization's strengths and shortcomings, as well as its opportunities and threats [14]. The following is an example of how to promote catering services to local businesses for the forthcoming holidays. One strategy combines a strength (location) with an opportunity (upcoming holidays). To overcome real or perceived weaknesses, it is necessary to take advantage of external possibilities [15]: for example, use social media and free samples at surrounding office buildings to advertise breakfast options and catering services at low or no cost, which pairs opportunities (holidays) with weaknesses (limited ingredients and a small advertising budget). Strengths can also be used to avoid being damaged by threats: use social media to highlight the quality of the ingredients and the fact that the company is a local institution, since local resources and a strong location are advantages when compared with national chains in the area (threats) [16]. Finally, recognize ways to strengthen weak points in order to avert danger: participate in local fairs and festivals, farmers' markets, and volunteer opportunities to maintain a presence in the community. In this way, the threat posed by national chains can be countered, because these activities are low-cost while still building a strong local following; this pairs weaknesses (no advertising budget) with threats (losing market share to national chains) [17]. To build a plan and make the greatest use of resources, the SWOT analysis looks both inward (at strengths and weaknesses) and outward (at opportunities and threats). The organization has control over its strengths and weaknesses, but opportunities and threats are things that the company cannot control yet should be aware of.
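To make the internal/external pairing concrete, here is a minimal sketch (ours, not from the cited studies) that encodes SWOT factors as a simple data structure and enumerates the four classic strategy quadrants; the catering factors are the hypothetical ones from the example above.

```python
# A minimal sketch: representing a SWOT analysis and pairing factors into
# the classic strategy quadrants (SO, WO, ST, WT). All factors below are
# hypothetical, echoing the catering example in the text.
from dataclasses import dataclass, field
from itertools import product

@dataclass
class Swot:
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
    opportunities: list = field(default_factory=list)
    threats: list = field(default_factory=list)

    def strategies(self):
        """Enumerate candidate factor pairings for each strategy quadrant."""
        return {
            "SO (use strengths to seize opportunities)":
                list(product(self.strengths, self.opportunities)),
            "WO (overcome weaknesses via opportunities)":
                list(product(self.weaknesses, self.opportunities)),
            "ST (use strengths to counter threats)":
                list(product(self.strengths, self.threats)),
            "WT (shore up weaknesses against threats)":
                list(product(self.weaknesses, self.threats)),
        }

catering = Swot(
    strengths=["central location", "quality ingredients"],
    weaknesses=["small advertising budget"],
    opportunities=["upcoming holidays", "social media reach"],
    threats=["national chains nearby"],
)
for quadrant, pairs in catering.strategies().items():
    print(quadrant, "->", pairs)
```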
In general, the nation's image and identity can be boosted, the environment preserved, the creative spirit nurtured, and social tolerance increased at all levels of society as a result of increased cross-cultural understanding supported by the creative economy. Indonesia's competitiveness and societal well-being are predicted to be enhanced by the innovative economy through 2025 [18]. Human resources (creative people) and scientific knowledge, including cultural legacy and technology, can be used to create added value in the creative economy, which can therefore be defined as creating value addition based on ideas generated by human creativity [19]. As a creative economic resource, the ability to generate or create something unique, to solve a problem, or to do something differently from the norm (thinking outside the box) is the most important one. When an old invention is put to creative use, new ideas begin to take shape. A product or method that is better, more valuable, and more useful is the result of a creative process that incorporates existing inventions. However, for an innovation to be considered a work of art with a specific purpose, it must be something that has never existed before. As a result, fostering the emergence of effective and competitive ideas necessitates a strong focus on creativity [20]. The creative industry and the creative economy are intertwined, although the creative economy encompasses a broader range of activities. In the creative economy, the creative value chain, nurturing environment, market, and archiving are all interdependent. The term "creative economy" encompasses not only economic value added but also social, cultural, and environmental value added. The creative economy, in addition to enhancing Indonesia's competitiveness, can also improve the nation's quality of life. The creative economy, to which the term "creative industry" refers, includes the core creative industry as well as forward- and backward-linkage creative industries. A core creative industry is one in which the primary source of added value is the use of individuals' own creative abilities. Core creative industries require input from other industries in order to create value. The term "backward-linkage creative industry" refers to industries that act as inputs to the core creative industries, while core creative industry output used as input for other industries defines the "forward-linkage creative industry" [21]. It may be concluded that the 15 creative industrial groups are interconnected, despite the fact that each of these industries has a unique set of industrial features. Value creation in the creative economy is facilitated by the creative industry. Aside from commercial transactions, social and cultural exchanges are also made in the process of creating creative value. Each creative industry has a unique creative value chain, which includes the development, production, distribution, and commercialization stages. Taking this into consideration, the creative industry is defined as "an industry which creates an output through the exploitation of creativity, skills, and individual talents to create value-added, employment, and an enhanced quality of life" [22].
Boosting the competitiveness and dynamism of local creative enterprises can be done in three ways: encourage creative entrepreneurship by providing experienced business mentors from throughout the country and the world, so that local entrepreneurs can become more competitive; increase local, national, and worldwide creative-entrepreneurship collaboration, cooperation, and partnership networks; and develop business incubators that involve all stakeholders and are professionally managed. As a result of the foregoing analyses, business incubators were formed in a number of places throughout Java and Bali [23]. There was a BCIC on the island of Bali (Bali Creative Industry Center). There was an MCF in the city of Malang (Malang Creative Fashion). An incubator is a formal environment designed to stimulate the growth and development of new and early-stage companies by increasing their opportunities to acquire resources aimed at facilitating the development and commercialization of new products, new technologies, and new business models. Business incubation is also a social and management process aimed at promoting innovative product creation and commercialization, as well as the introduction of novel technology. Small- and medium-sized enterprises (SMEs) can benefit from these initiatives, which provide services to help them get off the ground [24]. As part of this, the programme provides incubation with startup and company planning, as well as consulting in all critical areas of business development, growth, and funding. As an incubator, it primarily supports the growth of new businesses by providing networking opportunities. While incubators and incubatees form an internal network, incubators also support a wide range of external (local and international) networks: they increase external networks, promote startup creativity in the selection of partners, and involve the startup in the incubator [25]. The present study focuses on a smart product marketing strategy in a cloud service wireless network based on SWOT analysis.
Materials and Methods The proposed model in Figure 1 uses the cloud service wireless network system in smart product marketing, and the process is analyzed using SWOT analysis. This type of marketing system is fully digital and uses trending technologies such as Internet technology, cloud computing, and wireless network communication systems. Products are launched online on websites and video-sharing platforms and made to reach consumers via social media platforms. There is no physical launch in markets, stores, or showrooms; the product is available digitally. If consumers like the product after studying the information and specifications, they can order it. Here, wireless network systems such as mobile communications come into play: product orders can be placed from customers' mobile phones or laptops. The order reaches the marketer via the cloud, and the product is delivered to the consumer in the buying process. In some cases, products are manufactured to order, and in such cases the marketer or dealer informs the manufacturer about the order. Products are delivered to customers with the same ease and convenience as in the past. Cloud computing allows users to access various computing applications online, such as storing and accessing data, servers, software, databases, and networking. Likewise, cloud marketing uses these computing applications to market products through online applications, social media, web stores, and e-commerce platforms. The computing applications of the cloud are as follows. Cloud computing has already expanded its territory in the waters of Internet technology, as in Figure 2, and has been used by many top companies. There is no need for users to download software and install it on their computers; cloud computing efficiently provides these services. Since the usage of cloud computing has increased, cloud service wireless network systems have to be analyzed with the help of SWOT analysis. SWOT analysis systematically identifies the strengths, weaknesses, opportunities, and threats of cloud computing. SWOT analysis is a strategic planning and management tool used to assist an organization in identifying and evaluating its strengths, weaknesses, opportunities, and threats related to business competition or project planning; it is also termed "situational analysis." SWOT analysis assesses internal and external factors and gives detailed insights into future growth. It is always recommended that SWOT analysis be used as a pathfinder to obtain proper insights. The SWOT analysis for the proposed model is given in Figure 3, and the points are as follows: (1) Strengths: the whole proposed process is itself a cost-effective model. In traditional marketing, products are manufactured according to need, rough population, consumer strength, potential to buy, the economy, and other factors. Products are manufactured on a large scale and marketed through dealerships. Through our proposed system, products are introduced to customers through an online platform. Customers can order and have them delivered. No unnecessary costs are incurred in mass manufacturing, marketing, etc.
Customers can get their products in a customized way. For example, say a customer needs a newly launched bike customized: the customer can customize the bike using the web tools available on the bike company's website before placing the order. Companies can manufacture products on account of bookings and orders, which is also a cost-efficient model. The whole process of the proposed model is likewise a time-saving model. Thus, these are the strengths (S), weaknesses (W), opportunities (O), and threats (T) of smart product marketing in cloud service wireless networking systems. An initial estimate of the most important characteristics is available. As a result, the features that appear in the optimal solution have been recognised, and each feature i is weighted based on the number of times it appears in the set of best solutions, with characteristics that appear more frequently in the set of best approaches assigned a higher value η_i. If a feature i is enabled in all of the best approaches, then η_i = 1; if, on the other hand, a feature is not enabled at all, η_i = 0. Using the selection probability G_i^p(s), the novel hybrid algorithm determines whether or not feature i is chosen, described by G_i^p(s) = [τ_i(s)]^α [η_i]^β / Σ_{v∈J_p} [τ_v(s)]^α [η_v]^β, where J_p is the set of characteristics that can still be included in the temporary solution, and τ_i and η_i are the pheromone valuation and heuristic desirability of the feature. The parameters α and β govern the relative importance of the pheromone value in the SPMCP algorithm. When the entire novel hybrid algorithm has discovered its first solution, pheromone decay among all components begins, and each ant p deposits an amount of pheromone proportional to the quality of its solution, such that S(r) is the feature subset generated by ant p in that iteration, |s_r| is its length, and φ is the parameter that controls the relative importance of subset length, ranging from 0 to 1. Any values less than the prespecified value could be included in the rule, which is referred to as the rule's minimum-instances level. Once the ants have exploited all of the attributes, the rule-construction process is terminated. To generate rules, the ants employ a probabilistic model G_ij, as shown in equation (4), from which they can select a parameter value. The novel hybrid method states the pheromone amount τ_ij of an optimizer, representing the effectiveness of a term ij in terms of its ability to enhance the rule's prediction accuracy for each term ij that can be added to the current rule. The heuristic significance η_ij of term ij denotes the information structure associated with that term. As shown in equation (5), the result is estimated for every term ij.
It increases simplicity, because a shorter rule is easier to comprehend than a longer one. The rule-pruning process begins after the ants have finished the rule design process; this process eliminates unnecessary terms registered at each step, resulting in higher quality. The valuation function F is given by equation (6). This method is used to discover both simplified and stronger classification rules. At first, all routes are administered the same amount of pheromone, as specified in equation (7). The artificial pheromone update is represented as τ_ij(s) = (1 − ρ)τ_ij(s − 1) + Δτ_ij, where (1 − ρ)τ_ij(s − 1) is the substance remaining after evaporation along the path; the entries for nodes used by the current rule are updated, and it is necessary to simulate pheromone loss at the same time. As a result, the iterative operator is executed in accordance with equation (8), where ρ represents the pheromone evaporation rate, defined in equation (7), and τ_ij(s) is the pheromone value at iteration s. Endpoints Σ_{i=1..b} Σ_{j=1..a_i} τ_ij(s − 1) not yet used by the current rule, on the other hand, experience only evaporation, as shown in equation (9). In particular, we define the map's method of construction, and the associated exact solution based on semisupervised multiprocessing learning is derived as |U_R(τ)d|² / ab(τ) ≤ τ_ij² ||d||², for all r0 ≥ 0, r ∈ C_p. Throughout this method of likelihood computation, the sophistication of the calculation grows dramatically as the structures lengthen; the parameters of the model are nearly impossible to estimate on current hardware. The presence or absence of f_p(r) in such a sentence is solely determined, as in equation (11), by the word preceding it in the WSOT framework. A sentence's similarity is determined solely by equation (12), by the two or more words preceding it in the framework for SWOT. The quantity q_i(h), the student's language-level objective, represents the distinction between the learner's cognitive stage and the difficulty level of the learning materials. The learner's progress represents the distinction between the knowledge contained in a learning resource and the knowledge points the learner wants to acquire; the smaller the difference, the more closely the learning resource's expertise points match the learner's knowledge points. The optimization problem of expenditure on educational materials, (u, w; A, φ), represents the overall spending information among teaching materials, calculated as in equation (13). The principal function period of education, G_pp(u), clarifies the difference between the learning time required to complete instructional resources, b_i U_i(u) = B_R U(u), and the learning time consumed. As a direct consequence, deep learning methods use distributed normal data and consistency for local transcriptions. A SWOT model is made up of several deep convolutional layers of the type T′ = BM(T) that act on a p-dimensional input T(u) = (T_1(u), ⋯, T_p(u)) using a filter bank (w_l, l′), l = 1 ⋯ q, l′ = 1, ⋯, p, as well as a nonlinearity ψ, as described by equation (15). Obtaining a q-dimensional outcome T′(u) = (T′_1(u), ⋯, T′_q(u)) is yet another name for feature extraction. Finally, equation (16) represents the outcome.
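The selection probability and pheromone update above follow the standard ant-colony-optimization pattern, so a compact sketch of ACO-style feature selection may help make the procedure concrete. This is our illustration under that reading, not the paper's SPMCP implementation; all parameter values and the toy scoring function are assumptions.

```python
# A minimal sketch of ant-colony-style feature selection with the standard
# pheromone update tau_ij(s) = (1 - rho) * tau_ij(s - 1) + delta, as the
# update rule above describes. Parameter values are assumptions.
import random

def aco_feature_select(n_features, score, n_ants=10, n_iters=30,
                       alpha=1.0, beta=1.0, rho=0.1, subset_size=3):
    tau = [1.0] * n_features          # pheromone per feature
    eta = [1.0] * n_features          # heuristic desirability (uniform here)
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iters):
        subsets = []
        for _ant in range(n_ants):
            weights = [(tau[i] ** alpha) * (eta[i] ** beta)
                       for i in range(n_features)]
            total = sum(weights)
            probs = [w / total for w in weights]
            chosen = set()
            while len(chosen) < subset_size:  # sample features w/o replacement
                chosen.add(random.choices(range(n_features), probs)[0])
            s = score(chosen)
            subsets.append((chosen, s))
            if s > best_score:
                best_subset, best_score = chosen, s
        # Evaporate on all features, then deposit on features of good subsets
        tau = [(1 - rho) * t for t in tau]
        for chosen, s in subsets:
            for i in chosen:
                tau[i] += max(s, 0.0) / len(chosen)
    return best_subset, best_score

# Toy objective: features 0, 2 and 4 are the "informative" ones
target = {0, 2, 4}
best, val = aco_feature_select(8, lambda s: len(s & target))
print(best, val)
```

The evaporation rate rho plays exactly the exploration/exploitation role discussed in the results below: values near 1 wipe out pheromone history and concentrate the search on the latest solutions, while small values let early iterations keep influencing the search.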
of attributes, classes, and instances using the novel hybrid algorithm with the Smart Product Marketing Crisis Prediction (SPMCP) method is considered. Characteristics that appear more frequently in the set of best approaches are assigned a higher value η_i: if a feature i is enabled in all of the best approaches, then η_i = 1; if, on the other hand, a feature is not enabled at all, η_i = 0. Using the probability G_i^p(s), the novel hybrid algorithm determines whether or not feature i is chosen, as described in equation (1) and represented in Figure 4. The socially beneficial environment of a shared smart product market is provided, in which a number of attributes, classes, and instances from various stock institutions participate in the product market under strict supervision and regulation by the smart product market; with the adaptability of attempting to solve decentralized control problems, an optimization technique can be applied to capital market difficulties, with the optimization result described in Table 1. Let the artificial pheromone residue be (1 − ρ)τ_ij(s − 1), which is used as the substance throughout path inquiry; the entries for nodes used by the current rule are updated. It is necessary to simulate pheromone loss at the same time, so the iterative operation is executed in accordance with equation (8), retrieved in Figure 5, which can be subjected to technical indicators. This includes techniques for selecting features as well as other marketing instruments. We use iteration-based products as examples, but these constructs can be classified based on the type of protection (refer to Table 2). The trading strategy is, in fact, far more common in resource and currency futures markets, where traders are concerned with relatively short-term market volatility. The online storage service SWOT analysis is used to evaluate wireless network systems. A technically detailed study is performed by identifying its strengths (S), weaknesses (W), opportunities (O), and threats (T) while taking numerous factors into account. It makes use of cloud services in conjunction with various wireless network system technologies. An intelligent product is a device that communicates with its users over the Internet. The use of smart products is increasing by the day, which has resulted in higher production numbers from various companies. In this case, there is an issue in marketing all of the manufactured products in the face of fierce competition; as a result, a marketing strategy must be developed to promote the products, and cloud-based services can be used to complete this marketing task. Based on Figure 6, the substance associated with each solution fraction is degraded during the atomization step, where its flow rate is given by ρ. The rate of evaporation is critical in controlling the novel hybrid algorithm's balance of exploration and exploitation. If ρ is close to 1, then the pheromone values used in the next iteration step are heavily reliant on the reasonable alternatives from the previous iteration, resulting in a search concentrated around these solutions. For the result analysis of the smart product marketing SWOT in cloud service iteration using the novel hybrid algorithm, refer to Table 3. Relatively small values of ρ enable earlier iterations of the novel hybrid algorithm to affect the search strategy.
The principal function period of education, G_pp(u), clarifies the difference between the learning time required to complete instructional resources, b_i U_i(u) = B_R U(u), and the learning time consumed, retrieved in Figure 7. For effective feature selection and data categorization, the novel hybrid algorithm is used. The efficiency of the proposed novel hybrid algorithm with the Smart Product Marketing Crisis Prediction (SPMCP) method is validated using a sequence of data sets, and the results are compared with those of state-of-the-art methods. Overall research and testing results show that the novel hybrid algorithm with the SPMCP method significantly improves classification accuracy through feature selection techniques. The analytical values utilized in the following iteration phase are largely dependent on the reasonable alternatives from the previous iteration, resulting in a search process that revolves around these solutions. For the smart product marketing SWOT analysis in cloud service iteration using the novel hybrid algorithm, refer to Table 3. Because ρ is quite modest, early iterations of the hybrid algorithm strategies can influence the search strategy. The recommended novel hybrid algorithm, compared with ACO, has the best performance, with the Smart Product Marketing Crisis Prediction (SPMCP) method applied to every smart product market to determine its marketing activity in real time (refer to Table 4). Conclusions Classification techniques are commonly used to categorize a set of metrics in product marketing strategy research. An organization's overall smart product marketing strategy is determined using historical data classification methodologies. To design an accurate Smart Product Marketing Crisis Prediction (SPMCP) technique, it is essential to find acceptable characteristics (features). Classifier performance can be improved by solving an "attribute selection problem," which is a common problem in machine learning. Novel-hybrid-algorithm-based feature extraction and data preprocessing are part of the SPMCP model proposed in this study. The study implemented the innovative hybrid algorithm with information to detect and categorize processes. The new hybrid algorithm extracts features and picks out the most useful subset of them. The results proved that the proposed model works better than the existing algorithms. Figure 1: Architecture diagram of a smart product marketing strategy in cloud service wireless networks. Figure 2: Applications of cloud service wireless networks. Figure 4: Dataset classification of no. of attributes, classes, and instances using a novel hybrid algorithm with the Smart Product Marketing Crisis Prediction (SPMCP) method. Figure 5: Analysis for the smart product marketing SWOT in cloud service using the novel hybrid algorithm. Figure 6: Smart product marketing SWOT in cloud service iteration using the novel hybrid algorithm. Table 1: Dataset classification result analysis of no. of attributes, classes, and instances using a novel hybrid algorithm with the Smart Product Marketing Crisis Prediction (SPMCP) method. Table 2: Result analysis for the smart product marketing SWOT in cloud service iteration using the novel hybrid algorithm.
Table 3: Result analysis for the smart product marketing SWOT in cloud service iteration using the novel hybrid algorithm.
2022-06-09T15:06:13.932Z
2022-06-06T00:00:00.000
{ "year": 2022, "sha1": "1ef8f7b7c95abf29800d06c261dc862dc79506fa", "oa_license": null, "oa_url": "https://downloads.hindawi.com/journals/wcmc/2022/7539860.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "54646ecf925281a2744a92ebc9146c1cc089ce9f", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [] }
218505939
pes2o/s2orc
v3-fos-license
Emphysematous cholecystitis following routine colonoscopy Abstract Cholecystitis is a rare sequela of colonoscopy, and the relationship between the two has not yet been defined. This case study reviews a rural elderly patient who developed right upper quadrant pain following routine colonoscopy. He developed emphysematous cholecystitis, which required laparoscopy with conversion to open surgery via a Kocher's incision, and he underwent a subtotal cholecystectomy due to the severity of necrosis and inflammation. He had an uncomplicated recovery. Colonoscopy is an important diagnostic procedure, the most common complications of which are haemorrhage and perforation. There are fewer than 10 reported cases of associated cholecystitis and no reports of emphysematous cholecystitis. The hypothesized pathogenesis is dehydration and lithogenesis associated with traumatic translocation of organisms; however, no definitive correlation has been determined. Because of the potential health impact, cholecystitis cannot be excluded in post-colonoscopy abdominal pain, although the correlation between procedure and pathology remains unclear. INTRODUCTION Cholecystitis is a rare sequela of colonoscopy, and the relationship between the two has not yet been defined. This case reviews an elderly patient who developed abdominal pain following a routine colonoscopy, which was diagnosed as emphysematous cholecystitis and required surgical intervention. The theorized pathogenesis of post-colonoscopy cholecystitis is dehydration and lithogenesis concurrent with traumatic bacterial translocation associated with colonoscopy; however, no definitive relationship has been determined to date. CASE REPORT A 72-year-old man, with a history of type 2 diabetes mellitus, underwent a colonoscopy with standard bowel preparation following a positive faecal occult blood test. He presented to the emergency department 2 days later with increasingly severe right upper quadrant abdominal pain. On questioning, he stated that the abdominal pain had started 3 h following his colonoscopy, but he had failed to present to hospital earlier as he lived on a rural property more than 2 h away from the nearest medical assistance. On presentation to the emergency department, the patient was tachycardic and febrile, and his blood tests showed neutrophilia and elevated CRP and bilirubin with normal hepatic transaminases. On examination, he was focally tender with voluntary guarding in the right upper quadrant. A CT of the abdomen was performed, which identified a distended gallbladder with adjacent gas locules and fat stranding around the hepatic flexure. The differentials included a contained microperforation at the hepatic flexure or emphysematous cholecystitis. The patient was haemodynamically stable, and therefore a period of non-operative treatment was pursued. The patient was commenced on IV ampicillin, metronidazole and gentamicin, with strict fluid balance, nil by mouth and regular clinical reviews. On Day 3, the patient had shown minimal clinical improvement
and a progress CT was obtained, which revealed extensive gas within the gallbladder wall and adjacent free fluid consistent with emphysematous cholecystitis (Figs 1 and 2). The patient proceeded to theatre; at laparoscopy, the omentum was wrapped and densely adherent over a firm, thickened and palpable gallbladder. It was unsafe to proceed laparoscopically, and the decision was made to convert to a laparotomy. A Kocher's incision was performed, which revealed a necrotic, gangrenous gallbladder with surrounding seropurulent fluid in the right upper quadrant (Fig. 3). Due to necrosis and oedema of the surrounding anatomy, the cystic duct could not be confidently identified. A subtotal cholecystectomy was completed, with an EndoGIA 45 mm articulating stapler used to divide the gallbladder at Hartmann's pouch. The patient developed a lateral surgical site infection with a superficial wound collection on Day 7, which was drained on the ward and settled with antibiotics prior to discharge. Microbiology of intraoperative swabs grew profuse extended-spectrum beta-lactamase-producing Escherichia coli. Histopathology of the specimen reported florid acute-on-chronic inflammatory changes with extensive mucosal ulceration, oedema and transmural migration of inflammatory cells through the gallbladder wall. This was associated with full-thickness necrosis, resulting in perforation at the neck of the gallbladder and peritonitis. DISCUSSION Colonoscopy is an important diagnostic and therapeutic procedure. While it is well tolerated, it is not without risk. Colonic perforation and haemorrhage are the most common complications, with an incidence of ∼0.1% [1]. Other documented rare complications include splenic capsular tear, appendicitis, retroperitoneal haemorrhage and intramural caecal haematoma [2]. With regard to acute cholecystitis following colonoscopy, fewer than 10 cases have been described in the literature previously [3], with no reports of emphysematous cholecystitis. Emphysematous cholecystitis is an uncommon variant of cholecystitis. Risk factors include male sex, older age and diabetes mellitus, with the most common organisms involved being Clostridium welchii/perfringens and E. coli [4]. The exact mechanism by which acute cholecystitis occurs after colonoscopy is not well established, though it is proposed that the driving factor is the dehydration which develops following bowel preparation and fasting. Dehydration causes the bile to become more lithogenic, with diminished bile flow and bile stasis leading to gallbladder distention [5]. The mechanical manipulation of the colon and the associated bacterial translocation may also play a role in the secondary infection of the gallbladder with enteric pathogens. Whether cholecystitis following a colonoscopy is a chance occurrence of a common condition after a common procedure or whether there is a direct causal relationship between the two remains open to debate. Nevertheless, clinicians should include cholecystitis as a differential diagnosis for acute abdominal pain post colonoscopy. DISCLOSURE There are no potential conflicts of interest on the part of any named author.
2020-03-26T10:18:17.633Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "6f7a007b0249111d9ef3e8d311e6b49109c664a7", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1093/jscr/rjaa091", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "696304b7343fb3140ff3ce0745df643042166994", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221465157
pes2o/s2orc
v3-fos-license
Molecular characterization of piezotolerant and stress-resistant mutants of Staphylococcus aureus In previous work, following a pressure treatment of wild-type Staphylococcus aureus, we obtained piezotolerant isolates showing altered phenotypic characteristics. This work focuses on understanding the genetic background of their altered phenotype. Introduction Staphylococcus aureus is a Gram-positive, facultatively anaerobic, cluster-forming, non-motile coccus which is the leading cause of nosocomial infections (Tong et al. 2015). Normally these potentially lethal infections occur upon a transition from colonization to infection, when the bacterium gains access to the bloodstream and translocates to different parts of the body, where it can grow and cause further damage. A variety of other human diseases are also caused by S. aureus (Tong et al. 2015). One of these is a food-borne disease arising through growth in foods, where it can produce enterotoxins (Tassou et al. 2008). Food-borne S. aureus disease normally occurs through the contamination of food by a food handler, as a large proportion of humans are S. aureus carriers (Zeaki et al. 2019). If this food is kept at ambient or higher temperatures, the micro-organism can grow and produce toxins which are consumed together with the food and cause a food-borne illness characterized by diarrhoea, vomiting, nausea and abdominal cramps (Guidi et al. 2018). One of the novel methods for the inactivation of S. aureus and other bacteria in ready-to-eat foods is high hydrostatic pressure (HHP; Karatzas et al. 2007). After an initially slow adoption by the food industry, the last decade has seen a major increase in the number of HHP-processed food products entering the market, with many companies adopting the method. Despite the increasing use of HHP, there is still concern about previously described piezotolerant strains that can resist significant levels of pressure. Study of these strains can provide insights into how micro-organisms resist such pressure levels and can detail the mode of action of HHP (Karatzas et al. 2003, 2005). This knowledge could be used to minimize HHP treatment intensity, which can translate into lower costs, or to resolve problems with tailing effects, where such strains survive the HHP treatment and remain in the food product, posing a threat to consumers. Previously, we identified piezotolerant S. aureus variants that derived from a clonal population and could survive high levels of HHP (Karatzas et al. 2007). These variants comprised 9 out of 21 isolated survivors (52% of the isolates) following a HHP treatment of 400 MPa for 30 min. They all showed a small-colony phenotype, increased thermotolerance, impaired growth and reduced antibiotic resistance compared with the wild type (WT). They also showed weaker agglutination reactions, were defective in the production of the typical S. aureus golden colour and showed lower invasion of intestinal epithelial cells than the WT. In an attempt to identify the mechanism causing this significant change in phenotype, and based on our previous experience with piezotolerant mutants of Listeria monocytogenes having similar phenotypic changes attributed to mutations in the ctsR gene, we analysed their ctsR and hrcA genes, but we found no mutations (Karatzas et al. 2007). In the current work, we aimed to identify the mechanism behind the piezotolerance of these mutants.
We carried out further analysis of one of these piezotolerant mutants (AK23), isolated following an HHP treatment of a culture of the parent strain, and performed transcriptomic and genetic analyses. Additional genetic analysis of a specific locus was also carried out for the previously isolated piezotolerant isolates. Bacterial strains Previously, a WT methicillin-susceptible strain of S. aureus isolated from ham (National Agricultural Research Foundation, Lycovrissi, Greece) was used as the WT for HHP experiments. Following an HHP treatment of the WT, 21 survivors that were isolated randomly and analysed and described previously (Karatzas et al. 2007) were used in the current study. One additional representative piezotolerant isolate (AK23), which was isolated together with the latter ones but was not analysed previously, was also used. All isolates were derived from the same parent WT strain. Due to technical problems with the recovery of all other isolate stocks from the freezer, work proceeded with AK23, which, however, needed to be verified for its phenotypic characteristics. Subsequently, through detailed and thorough work, all other isolates were recovered, but only after all molecular work with AK23 had been completed. The WT methicillin-susceptible S. aureus strain was resistant to ampicillin, amoxicillin, penicillin, nalidixic acid and sulfonamides. Stock cultures were kept at −80°C in 15% (vol/vol) glycerol or microbank tubes (Pro-Lab Diagnostics, Neston, Wirral, UK), transferred into sterile brain heart infusion (BHI) broth (Oxoid, Hampshire, UK) with or without 5% Tween 80 (ICI Surfactants, Wilmington, DE) and incubated twice at 37°C overnight (0.3% (vol./vol.) inoculum). Cultures were grown with shaking (160 rev min⁻¹), and Tween 80 was added only to those that would be subjected to HHP or heat treatment, to alleviate the clumping of bacterial cells. Colony morphology Stationary-phase cultures of the WT and AK23 were grown as described above and passaged on Baird Parker agar, BHI agar, Columbia blood agar (COLBA) and plate count agar plates (Oxoid). Plates were incubated at 37°C for 24 h, and colonies were examined for their size, colour and haemolysis characteristics on COLBA plates. Analysis of growth kinetics Growth characteristics of the WT and AK23 were assessed at 37°C with shaking (160 rev min⁻¹) or at 20°C under static conditions. Five microlitres from each culture was inoculated into 200 µl fresh BHI. Samples were placed onto a Sero-Well microtitre plate (Sterilin, Staffordshire, UK), and bacterial growth was assessed by measuring the optical density at 600 nm (OD600nm) of the samples with a Bio-Rad model 680 microplate reader (Bio-Rad, Hercules, CA). Growth curves were constructed in triplicate. Doubling times were calculated from values obtained during the exponential phase of growth. The times needed for agglutination for the AK23 isolate and the WT were recorded. Treatment Cultures were placed in sterile plastic stomacher bags (Seward, London, UK) that were sealed while avoiding an excess of air bubbles. Pouches were submerged in glycol (Resato, Roden, The Netherlands), which was the pressure-transmitting fluid, and subjected to 450 MPa for 15 min in an HHP unit (Resato) at 20°C. The viability of S. aureus cells was determined before and after the pressure treatment. Decimal dilutions of samples in saline solution (Oxoid) were prepared, followed by plating in triplicate onto BHI agar (1% (wt./vol.) agar). Plates were incubated at 37°C for 5 days.
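The viability comparison rests on simple dilution-plating arithmetic; the following minimal sketch (ours, with hypothetical plate counts) shows how log reductions such as those reported in the Results are computed from colony counts and dilution factors.

```python
# A minimal sketch (values hypothetical): converting plate counts from a
# decimal dilution series into log10 CFU per ml, and the log reduction
# achieved by an HHP treatment (here 450 MPa for 15 min, as above).
import math

def log10_cfu_per_ml(colonies, dilution_exponent, plated_volume_ml=0.1):
    """Colonies counted on a plate of the 10^-dilution_exponent dilution."""
    cfu_per_ml = colonies * 10 ** dilution_exponent / plated_volume_ml
    return math.log10(cfu_per_ml)

before = log10_cfu_per_ml(colonies=152, dilution_exponent=6)  # ~9.2 log
after  = log10_cfu_per_ml(colonies=64,  dilution_exponent=2)  # ~4.8 log
print(f"log reduction = {before - after:.2f} log CFU per ml")
```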
Analysis of thermotolerance BHI broth (100 ml) was inoculated (0.1%, vol./vol.) with overnight cultures of the strains incubated with shaking (160 rev min⁻¹) at 37°C. Cultures were grown until mid-exponential phase (OD600nm between 0.4 and 0.6), and samples were placed in 7-ml plastic tubes (Sterilin, Staffordshire, UK) and incubated in a water bath at 58°C for 20 min. Samples were taken before and after treatment, decimal dilutions in saline solution were prepared using saline tablets (Oxoid), and viability was determined. Gentamicin protection assay The gentamicin protection assay was performed for the WT strain and AK23 as described previously (Elsinghorst 1994). In brief, Caco-2 human colon adenocarcinoma cells (European Collection of Cell Cultures number 86010202) were maintained in Dulbecco's modified Eagle's medium containing 2 mM glutamine, 1% non-essential amino acids and 10% foetal bovine serum (DMEM; Sigma-Aldrich, Poole, UK). Penicillin or streptomycin (Invitrogen, Paisley, UK) was used at a concentration of 100 U ml⁻¹ in DMEM until 3 days before cells were used for infection studies. The culture medium was changed every 2-3 days. Just before experiments, Caco-2 cells were washed thrice with sterile phosphate-buffered saline (PBS), and subsequently 2 ml of antibiotic-free DMEM was added to each well. The OD600nm of stationary-phase cultures of S. aureus AK23 and the WT were measured, and all cultures were adjusted to similar OD600nm. We previously confirmed that there was a good correlation between OD600nm measurements and cell numbers for both strains, as assessed by comparing CFU and OD600nm values. Fifty microlitres of the adjusted bacterial suspensions was added to the wells, resulting in 10⁶ CFU per well and yielding an estimated multiplicity of infection of 2.5:1 (bacteria to cells). Cells were incubated for 1 h at 37°C and subsequently washed twice with PBS and suspended in 2 ml of PBS containing 100 µg ml⁻¹ gentamicin. After 2 h at 37°C, cells from cultures exposed to gentamicin were rinsed twice with sterile PBS and lysed with 2 ml 1% (vol/vol) Triton X-100 in PBS. Following a brief incubation for 5 min at 37°C, cell lysates were serially diluted and plated onto COLBA to quantify the number of intracellular bacteria. Antibiotic disk diffusion test The antibiotic disk diffusion test was conducted according to the recommendations of the British Society for Antimicrobial Chemotherapy (Andrews 2005). In brief, cells from four morphologically similar colonies of each strain grown overnight on Iso-Sensitest agar plates (ISA; Oxoid) were transferred with a sterile loop into Iso-Sensitest broth (ISB; Oxoid) and incubated with shaking at 37°C until the turbidity was equal to the 0.5 McFarland standard. Subsequently, cells from the suspension were transferred with a sterile cotton-wool swab onto ISA, spread evenly over the entire surface of the plate by swabbing in three directions. Immediately after swabbing, antibiotic disks (Oxoid) were applied to the plate, and following incubation at 37°C for 18 h, zones of inhibition were measured. Determination of MIC The MICs of a range of antibiotics were determined using the broth doubling dilution method according to the recommendations of the British Society for Antimicrobial Chemotherapy (Andrews 2001). Overnight cultures of strains were prepared in ISB at 37°C and diluted to 0.5 McFarland. Kanamycin, gentamicin and nalidixic acid were purchased from Sigma-Aldrich (Poole, UK).
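The broth doubling dilution method reduces to straightforward arithmetic; a minimal sketch (ours, with hypothetical growth readings, not the BSAC protocol itself) shows how the dilution series is built and the MIC read off.

```python
# A minimal sketch: building a doubling dilution series and reading the MIC
# as the lowest concentration with no visible growth. The well readings are
# hypothetical; concentrations are in mg/l.
def doubling_dilutions(top_mg_l, n):
    """Top concentration halved n-1 times, as in broth doubling dilution."""
    return [top_mg_l / 2 ** i for i in range(n)]

concs = doubling_dilutions(128, 9)           # 128, 64, ..., 0.5 mg/l
# Hypothetical visible-growth readings, aligned with concs (high to low)
growth = [False, False, False, True, True, True, True, True, True]

no_growth = [c for c, g in zip(concs, growth) if not g]
mic = min(no_growth)                         # lowest conc with no growth
print(f"MIC = {mic} mg/l")
```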
Stability of phenotype The AK23 isolate was assessed for the stability of its phenotypes. This isolate was subcultured for 10 consecutive days using 0.3% (vol./vol.) inocula in fresh BHI medium. On day 10, the culture was inoculated using a 0.3% (vol./vol.) inoculum in 100 ml of BHI broth, incubated at 37°C with shaking (160 rev min⁻¹), and tested for its growth characteristics, colony morphology and piezotolerance as described above. Sequencing of deleted region In previous work, we attempted to investigate the source of the altered phenotype of the piezotolerant isolates by looking at possible mutations in ctsR and hrcA, based on previous accounts of the involvement of these genes in piezotolerance and stress resistance (Karatzas et al. 2003, 2005), but no mutations were found (Karatzas et al. 2007). In this work, we investigated the possibility that the isolates possess a major mutation, which was indicated by genomic indexing and microarray transcriptomics, as both techniques showed no hybridization with any of the genes from SAR0666-SAR0673. Therefore, we designed primers DelFor (TATCGGGCGAATTGCTTATC) and DelRev (GGGAAATGTAGCGATGATGC) that could amplify this specific region. We were confident that these primers would work, since we designed them in areas that showed hybridization with the genomic DNA of the isolate: DelFor was designed in the intergenic region between SAR0664 and SAR0665, and DelRev within the SAR0674 gene. Chromosomal DNA from the WT, all piezotolerant isolates described previously (Karatzas et al. 2007) and AK23 was isolated as described by Pospiech and Neumann (1995) and used as PCR template. All primers were provided by Eurofins Genomics (Ebersberg, Germany), and amplification and sequence analysis were performed in duplicate by Eurofins Genomics. Isolation for genomic indexing Since the parent strain and AK23 used in these experiments had not previously been typed, we performed genomic indexing using genomic DNA from the parent strain and the microarray SAM-62. This procedure was essential to assess the level of hybridization between the genes present in the parent strain and those on the microarray chip. To isolate DNA, cells were grown overnight (18 h) at 37°C and, subsequently, 7 ml of this culture was taken and centrifuged at 3500 g for 5 min at 4°C. The pellet was washed twice in 1 ml PBS, resuspended in 0.5 ml PBS and kept on ice. Subsequently, cells were transferred into the SETS II tube system (Roche Diagnostics Ltd, Burgess Hill, UK), followed by shaking for 30 s at 6000 rev min⁻¹ in a MagNA Lyser device (Roche Products Ltd, Welwyn Garden City, UK). The lysate was centrifuged for 2 min at 14 000 rev min⁻¹, and 400-500 µl of the supernatant was transferred to a clean Eppendorf tube, followed by 200 µl of ethanol (96%). This cell lysate was then processed following the instructions of the Qiagen DNA Minikit (Manchester, UK) from step 4 onwards. Subsequently, the DNA was eluted in 80 µl DNA/RNA-free water, its concentration was measured in a NanoDrop 1000 spectrophotometer (Thermo Scientific, Wilmington, DE), and it was subjected to the microarray hybridization protocol. Isolation and slide hybridization Overnight cultures were diluted 100 times in fresh pre-warmed brain heart infusion (BHI) broth and grown at 37°C in 5% CO2 until an OD590nm of 0.5 was reached. A volume of 30 ml of the culture was pelleted and suspended in 1 ml RNAPro Solution (Qbiogene Inc., Heidelberg, Germany).
Staphylococcus aureus RNA was isolated using the FastRNA Pro Blue Kit according to the manufacturer's instructions (Qbiogene Inc., Heidelberg, Germany) using the FastPrep FP120 instrument (two cycles of 45 s at a speed setting of 6.0; Qbiogene). After isolation, the RNA was treated with 6 U TURBO DNase (Ambion, Austin, TX) according to the manufacturer's instructions. Hybridization probes were generated from 2 µg total RNA of each strain according to the protocol of the Bacterial Microarray Group (BuG@S; St. George's Hospital Medical School, London, UK). RNA was mixed with 3 µg random primers (Invitrogen, Breda, The Netherlands), heat denatured and snap-cooled on ice. The RNA was reverse transcribed to cDNA to incorporate Cy5 dCTP (GE Healthcare, Diegem, Belgium) or Cy3 dCTP (GE Healthcare). Labelled cDNA samples were pooled and hybridized overnight to an S. aureus microarray with PCR amplicons printed on UltraGAPS glass slides (Corning, NY) (BuG@S; Witney et al. 2005). Microarray slides were prehybridized in 3.5× SSC (1× SSC is 0.15 mol l⁻¹ NaCl plus 0.015 mol l⁻¹ sodium citrate)-0.1% sodium dodecyl sulfate (SDS)-10 mg ml⁻¹ bovine serum albumin at 65°C for 20 min, before a 1-min wash in distilled water and a subsequent 1-min wash in isopropanol. Each Cy3-labelled test strain cDNA was mixed with an equal amount of Cy5-labelled reference strain cDNA, purified using a MinElute kit (Qiagen, Manchester, UK), denatured and mixed to achieve a final 45 µl hybridization solution of 4× SSC-0.3% SDS. Using two 22 × 22 mm LifterSlips (Erie Scientific, Portsmouth, NH), the microarray was sealed in a humidified hybridization chamber (Telechem International, Sunnyvale, CA) and hybridized overnight by immersion in a water bath at 65°C for 16-20 h. Slides were washed once in 400 ml 1× SSC-0.06% SDS at 65°C for 2 min and twice in 400 ml 0.06× SSC for 2 min. The array design is available in BµG@Sbase (accession no. A-BUGS-17; http://bugs.sgul.ac.uk/A-BUGS-17) and also in ArrayExpress (accession no. A-BUGS-17). Transcriptomics analysis Microarray slides were scanned using the ScanArray Express HT scanner (Perkin Elmer, Groningen, The Netherlands) following the manufacturer's instructions. GeneSpring GX version 7.3 software (Agilent Technologies, Santa Clara, CA) was used for normalization and further data analysis. Expression levels were quantified as the log-ratio of the signal derived from RNA isolated from the mutant divided by the signal derived from the WT strain RNA. Expression levels were averaged over the duplicate experiments. Statistical analysis A t-test assuming equal variances was used on the triplicate samples to determine statistical differences between the mean log reductions in CFU of each isolate and of the wild type. P values of less than 0.05 were considered to indicate statistical significance. Growth characteristics and colony morphology Growth of the WT and AK23 was assessed in liquid and solid media. AK23 demonstrated the typical colony morphology of S. aureus on plate count and Baird Parker agar, but it formed significantly smaller colonies than the WT on all solid media (Fig. 1). This phenotype was observed previously by Karatzas et al. (2007) in piezotolerant S. aureus isolates. Furthermore, AK23 demonstrated a reduced ability to produce the typical golden colour of WT S. aureus (Fig. 1a). The small-colony phenotype of AK23 indicated impaired growth, which was also confirmed in liquid media when cells were grown under static conditions at 20°C (Fig.
2) and the doubling times were 0.77 and 1.46 h for the WT and AK23, respectively. However, there was no difference in growth when cells were incubated at 37°C with shaking (data not shown). The above-mentioned phenotype of AK23 was stable over the 10 days of continuous cultivation. Agglutination test Further to identification through colony morphology on Baird Parker agar, AK23 was also identified as S. aureus by a successful agglutination test. However, the agglutination reaction was significantly weaker for AK23 than for the WT, as it took 23 s for the former and 10 s for the latter (Fig. 3). Furthermore, the coagulation reaction for AK23 was not very strong and produced smaller clumps. Stress resistance of AK23 The AK23 isolate showed significant piezotolerance compared with its WT parent strain (Fig. 4). At 450 MPa for 15 min, AK23 showed only a 3 log reduction of CFU per ml, while the WT showed a significant inactivation of 5.25 log reduction of CFU per ml. Furthermore, we assessed the thermotolerance of AK23 and found that it was more resistant than the WT, with statistical significance: AK23 showed a 4.15 log reduction of CFU per ml while the WT showed a 6.05 log reduction of CFU per ml (Fig. 5). The piezotolerant phenotype observed for AK23 was stable over the 10 days of continuous cultivation. Gentamicin protection assay We found that AK23 invaded Caco-2 intestinal epithelial cells much less efficiently, with statistical significance (P < 0.05). We documented 8.56 log CFU per ml for the WT and 5.10 log CFU per ml for AK23 (Fig. 6). Antimicrobial resistance (inhibition zones and minimum inhibitory concentrations) AK23 was significantly more susceptible than the WT to the antibiotics amikacin, kanamycin, oxacillin, neomycin and gentamicin; zones of inhibition for these antibiotics were significantly larger than those of the WT (Table 1). Analysis of the MICs also showed that AK23 was more susceptible to antibiotics, although the differences did not change the status of AK23 to sensitive (Table 1). Deletion present in AK23 and other piezotolerant isolates While ultimately aiming to perform microarray transcriptomic analysis of the WT and AK23, we first performed genomic indexing, in which DNA from the WT was hybridized with the DNA on the microarray slides. Along with the WT, we included AK23 in these experiments, and although all genes on the slide showed hybridization for the WT, genes from SAR0665 to SAR0673 showed no hybridization for AK23. This suggested that a major deletion was present in AK23. Therefore, we designed primers DelFor and DelRev in the intergenic region between SAR0664 and SAR0665 and within the SAR0674 gene, respectively. These primers were expected to give a short amplicon when used in PCR with DNA from the AK23 mutant but not with the WT: in the WT, the distance between these primers is more than 10 000 bp, making it impossible to obtain an amplicon with the setup used for this specific PCR reaction. Furthermore, we repeated this PCR with all isolates analysed in the previous work (Karatzas et al. 2007) and found that all piezotolerant isolates (1, 3, 4, 6, 7, 8, 10, 15, 21) and AK23 gave a positive PCR, while all other isolates and the WT produced no detectable PCR product. Subsequently, we proceeded with sequencing the amplicon obtained from AK23, which revealed a major deletion of 9351 bp affecting 10 genes (Table 2).
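As a concrete companion to the statistical comparison described in the Methods, the following minimal sketch applies a two-sample t-test assuming equal variances to triplicate log reductions; the replicate values are hypothetical, centred on the means reported above (WT ~5.25, AK23 ~3 log CFU reduction at 450 MPa for 15 min).

```python
# A minimal sketch of the statistical comparison described in the Methods:
# a two-sample t-test assuming equal variances on triplicate log reductions.
# The individual replicate values below are hypothetical.
from scipy import stats

wt_log_red   = [5.30, 5.25, 5.20]   # hypothetical WT triplicates
ak23_log_red = [3.05, 3.00, 2.95]   # hypothetical AK23 triplicates

t_stat, p_value = stats.ttest_ind(wt_log_red, ak23_log_red, equal_var=True)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in piezotolerance is statistically significant")
```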
This deletion most probably occurred through homologous recombination between two relatively small homologous regions of 10 bp (ATTGCGGGTG), which are present in both genes SAR0665 and SAR0674 (Fig. 7). The deletion encompassed the yts operon (SAR0669-SAR0672), pitA (SAR0674) encoding a putative low-affinity inorganic phosphate transporter, and lipA encoding a putative lipase (SAR0669). Transcriptomic analysis of AK23 We performed transcriptomic microarray analysis of AK23 in comparison with its isogenic WT. We found that all genes upregulated in AK23 were related to phosphate uptake (Table 3). The genes SAR1398-SAR1402 comprise the Pst operon in S. aureus, which plays a role in phosphate uptake. Furthermore, the phoB gene, SAR0110 encoding a putative Na/Pi cotransporter, and the hypothetical protein SAR0584 were all upregulated. There were also genes that were downregulated in AK23, of which the majority, such as hld, spa, rnaIII, plc, nuc, splB, hysA2, SAR0694 and SAR0304, are part of the agr system (Table 4). Furthermore, the genes pyrAA and pyrAB, which play a role in pyrimidine and arginine biosynthesis, and nrdD and narJ, which contribute to anaerobic respiration, were also downregulated. dltA, which contributes to teichoic acid alanylation affecting membrane integrity, and mprF, encoding a phosphatidylglycerol lysyltransferase playing a role in antibiotic resistance, were also downregulated. Discussion In previous work we have shown that clonal populations of S. aureus are able to give rise to stable piezotolerant variants (Karatzas et al. 2007), a phenomenon we also observed and investigated previously in L. monocytogenes (Karatzas et al. 2003, 2005). Listeria monocytogenes possesses a hypermutable region located on the ctsR stress gene regulator, giving rise to stress-resistant piezotolerant mutants at a rate of 1 per 10 000 cells in a WT culture. This subpopulation of ctsR mutants is always present in the culture and can survive HHP treatments. Such strains are a major concern since many food products nowadays are pressure treated, and their study is essential to ensure the safety of such food products, as their design should consider the worst-case scenario. Furthermore, such work could provide us with a broader understanding of stress resistance in bacteria, impacting various scientific areas. During the above work, we found that the piezotolerant S. aureus variants mentioned above comprised 52% of the surviving WT population following a challenge of the WT at 400 MPa for 30 min. All these piezotolerant variants showed a small colony phenotype, increased thermotolerance, impaired growth, weaker agglutination reactions, lower invasion of intestinal epithelial cells, defective golden colour production and reduced antibiotic resistance compared to the WT. An initial attempt to identify the genetic basis of this phenotype, by examining the candidate genes ctsR and hrcA, was unsuccessful as we did not find any mutations (Karatzas et al. 2007). The selection of ctsR was based on a similar phenotype previously observed in piezotolerant mutants of L. monocytogenes (Karatzas et al. 2003, 2005), while hrcA is another transcriptional regulator of stress genes that could have been involved in this phenotype. During the above work additional isolates were obtained. In the present work, to identify the molecular basis of the piezotolerance, we used one of these additional isolates, namely AK23.
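The 10-bp repeat (ATTGCGGGTG) proposed above as the recombination substrate can be located by a simple exhaustive scan for motifs shared between the two genes. The C sketch below illustrates this; the two sequence fragments are fabricated placeholders around the repeat, not the actual SAR0665/SAR0674 sequences.

```c
#include <stdio.h>
#include <string.h>

#define MOTIF_LEN 10

/* Report every motif of MOTIF_LEN bases that occurs in both sequences.
 * A shared direct repeat like this can mediate homologous recombination
 * and deletion of the intervening region. */
static void shared_motifs(const char *s1, const char *s2) {
    size_t n1 = strlen(s1);
    for (size_t i = 0; i + MOTIF_LEN <= n1; i++) {
        char motif[MOTIF_LEN + 1];
        memcpy(motif, s1 + i, MOTIF_LEN);
        motif[MOTIF_LEN] = '\0';
        const char *hit = strstr(s2, motif);
        if (hit)
            printf("shared motif %s at s1:%zu, s2:%zu\n",
                   motif, i, (size_t)(hit - s2));
    }
}

int main(void) {
    /* placeholder fragments; only the ATTGCGGGTG repeat is taken from the text */
    const char *sar0665_frag = "GCTAATTGCGGGTGTTAGC";
    const char *sar0674_frag = "TTCATTGCGGGTGAAGGA";
    shared_motifs(sar0665_frag, sar0674_frag);
    return 0;
}
```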
AK23 was chosen due to initial problems faced when recovering the original isolates from the freezer, which were subsequently resolved. As we decided to proceed with AK23, its phenotype had to be verified. We found that this isolate, similarly to all piezotolerant isolates obtained previously, showed the defective golden colour production (Fig. 1a), small colony phenotype (Fig. 1a,b), impaired growth (Fig. 2), delayed agglutination reaction (Fig. 3), increased piezotolerance (Fig. 4) and thermotolerance (Fig. 5) and reduced invasiveness in Caco-2 intestinal epithelial cells (Fig. 6). Once we established that AK23 shared the same phenotype with the other piezotolerant isolates, we proceeded to identify the genetic basis of this phenotype. We decided to look at its transcriptome through microarray analysis. Prior to transcriptomics, we performed genomic indexing to assess the level of hybridization between each of the gene amplicons on the microarray slides and the genomes of the WT and AK23. During this process, we observed no hybridization for any of the genes in the genomic region between SAR0666 and SAR0673, which were present in the WT. This suggested the presence of a major deletion in AK23, and we designed primers that could amplify the wider region where the deletion was located. Once we succeeded in recovering all isolates from our previous study, we performed the above PCR, which was successful only with the piezotolerant isolates but unsuccessful with the WT and all other remaining isolates. Following sequencing of the PCR amplicons obtained from AK23 and the piezotolerant SCVs, we showed that they possessed a major deletion of 9351 bp affecting 10 genes (Table 2). This explains why no amplicons were obtained from the WT and all other nonpiezotolerant isolates, as the replication of such a long amplicon would require different PCR conditions, which were not used. We suggest that the deletion was the result of a homologous recombination event between two relatively small homologous regions of 10 bp (ATTGCGGGTG) which are present in both the SAR0665 and SAR0674 genes (Fig. 7). In contrast to the L. monocytogenes mutants we identified previously (Karatzas et al. 2003, 2005), these S. aureus piezotolerant mutants cannot undergo phase variation, as the loss of this major genetic region cannot be recovered. The transcriptomic analysis presented here revealed various differences between AK23 and the WT. A significant percentage of the genes upregulated in AK23 were related to phosphate uptake (Table 3). This is probably due to the partial deletion of pitA (Table 2). Overall, S. aureus encodes three distinct Pi transporters, PstSCAB, PitA and NptA (Kelliher et al. 2018). PitA is the low-affinity Pi transporter which is constitutively expressed, and its partial deletion in AK23 could result in lower levels of Pi in the cell. This in turn could lead to upregulation of the high-affinity system for the uptake of inorganic phosphate (the Pst operon) and of a putative Na/Pi cotransporter protein (SAR0110). This partial pitA deletion might be responsible for the SCV and slow growth phenotype of AK23, as this has been demonstrated previously in a pitA S. aureus deletion mutant (Kelliher et al. 2018). phoU, which was upregulated in AK23, is another gene playing a role in phosphate metabolism, acting as a negative regulator of phosphate uptake in Escherichia coli (Li and Zhang, 2007; Overton et al. 2011). However, the role of phoU in phosphate metabolism in S.
aureus is not clear, although it is widely accepted that it plays an important role which, however, is different from that in E. coli or Bacillus subtilis (Kelliher et al. 2018). The presence of phoU at the pst locus and their common upregulation might also point to such a role. [Figure 7: Schematic representation of the deletion in the mutant AK23. A whole region covering 9351 bp has been deleted, affecting 10 genes. The mutation occurred through homologous recombination between two short homologous regions of 10 bp (ATTGCGGGTG).] phoU might also be involved in the increased susceptibility of AK23 to antibiotics, as its mutation has been shown to lead to increased susceptibility to antibiotics in E. coli (Li and Zhang, 2007). However, seeing the upregulation in AK23, it should have had the opposite effect (increased resistance). Nevertheless, as stated above, the function of PhoU is not well understood in S. aureus. Furthermore, the pitA mutation, resulting in Pst operon upregulation, could possibly explain the piezotolerance of AK23. In previous work on B. subtilis, it has been shown that pstA is upregulated during recovery following an HHP treatment, suggesting that it plays a role in this process (Nguyen et al. 2019). The authors suggested that HHP treatment may cause phosphate limitation, which can lead to upregulation of pstA. Therefore, AK23, in which pstA is constitutively expressed due to the loss of pitA, is able to recover much better than the WT after HHP treatment. Furthermore, phoU, which is upregulated in AK23, has previously been implicated in heat resistance in E. coli (Li and Zhang, 2007). The authors showed that loss of phoU resulted in reduced thermotolerance. We could speculate that the increased expression of phoU could bring about the opposite effect, i.e. the increased thermotolerance which we observed in AK23. It is also important to note that in the present work we did not find any alteration in the expression of toxin genes in AK23 compared to the WT. Based on that, we should assume that toxin production per cell (and, since we look at piezotolerance in food HHP applications, enterotoxin production in particular) is normal and similar to the WT. On the other hand, posttranscriptional effects or altered toxin production in other stages of growth cannot be excluded. Based on these findings, if AK23 has per-cell toxin production similar to the WT, a lower toxin production over time is expected as a result of the lower growth rate of AK23. There are also other possible reasons for the increased susceptibility of AK23 to antibiotics. The downregulation of mprF in AK23 observed here could be linked to the above phenotype. MprF is a multiple peptide resistance factor in S. aureus encoding a phosphatidylglycerol lysyltransferase that plays an important role in resistance and susceptibility to various antibiotics such as monomycin and vancomycin, methicillin, oxacillin, bacitracin, gentamicin and beta-lactams (Ruzin et al. 2003; Oku et al. 2004; Staubitz et al. 2004). In addition, it plays a role in resistance to synthetic peptides and human defensins (HNP1-3), in the evasion of oxygen-independent neutrophil killing, and in susceptibility to hBD3, CAP18 and other cationic antimicrobial peptides (Ruzin et al. 2003; Oku et al. 2004; Staubitz et al. 2004). AK23 also showed a partial deletion of SAR0669, which is similar to the ytS gene of B. subtilis and could also be involved in the antibiotic sensitivity of AK23. As has been shown previously, a ΔytS mutant of B. subtilis was bacitracin sensitive (Bernard et al. 2003).
Furthermore, dltA, which is significantly downregulated in AK23, encodes a D-alanine-D-alanyl carrier protein ligase. This gene is involved in the metabolism of teichoic acids affecting membrane integrity, which is also involved in antimicrobial susceptibility, mainly to cationic antimicrobial peptides (Weidenmaier et al. 2005). All these features might be responsible for the increased susceptibility of AK23 to some of the antimicrobials. Interestingly, the majority of the genes downregulated in AK23 (hld, spa, rnaIII, plc, nuc, splB, hysA2, SAR0694 and SAR0304) play a role in virulence and are part of the agr system (Table 4; Reed et al. 2001; Huntzinger et al. 2005; White et al. 2014). The agr system has previously been shown to play a role in invasion of Caco-2 cells (Chessa et al. 2016), which explains the lower invasiveness of AK23 and the other isolates in Caco-2 cells. It is clear that the lower transcription levels of these agr system genes occur due to the downregulation of rnaIII, the effector gene of the accessory gene regulator (agr) system (Gupta et al. 2015). However, we have so far been unable to identify the reason why rnaIII is downregulated in AK23. Our work links a significant mutation in S. aureus with a range of phenotypic characteristics. However, since the mutation involves a significant number of genes, further work should focus on deleting each of the genes individually to link them with specific phenotypic characteristics. Given that this mutation affects significant phenotypic characteristics of S. aureus involved in virulence, antibiotic resistance and food safety, such work could have a major impact on various areas of science, encompassing medical microbiology, antimicrobial chemotherapy and food safety.
Late presentation of toxoplasmosis in renal transplant recipients Toxoplasma gondii is a rare cause of infection in renal transplant recipients and usually occurs within 3 months of transplantation, this being the period of maximum immunosuppression. We report two cases of toxoplasmosis presenting several years after transplantation. One patient developed Toxoplasma retinitis 4 years after renal transplantation and lost peripheral vision in his affected eye. Another developed cerebral toxoplasmosis 6 years following his second renal transplant but did not survive despite treatment. These cases highlight the need for a high index of suspicion of toxoplasmosis as a potential diagnosis even during the later stages of the post-transplant period as survival is poor without early recognition and treatment. Background Toxoplasma gondii is an opportunistic pathogen frequently found in AIDS and heart transplant patients, but it remains a rare cause of infection in renal transplant recipients [1]. It can result from reactivation of latent infection or from primary infection. Acute infection may be transmitted via an allograft from a seropositive donor to a seronegative recipient and tends to occur within 3 months of transplantation, which corresponds to the period of maximum immunosuppression [2,3]. We report two cases of toxoplasmosis occurring several years following renal transplantation. Case report 1 Case 1 was a 47-year-old male with end-stage renal disease of unknown cause who received a cadaveric renal transplant in 2003. The donor was seronegative for T. gondii. Initial immunosuppression consisted of cyclosporin and prednisolone, along with prophylactic cotrimoxazole for 3 months. Shortly afterwards, he had an episode of borderline acute cellular rejection, requiring three doses of methylprednisolone. He had another episode of acute cellular rejection in 2006, requiring a further three doses of methylprednisolone. At this point, he was switched from cyclosporin to tacrolimus. A repeat renal transplant biopsy 1 month later showed ongoing acute cellular rejection, so he received a course of rabbit anti-thymocyte globulin and mycophenolate mofetil (MMF) was added to tacrolimus and prednisolone for immunosuppression. In 2007, he presented with distorted vision in the right eye and was treated for anterior uveitis. His vision continued to worsen with a decrease in visual acuity in the right eye to 6/18. Dilated fundoscopy revealed significant vitritis with punched-out white retinal lesions in the nasal retina, consistent with Toxoplasma retinitis. Treatment with azithromycin was commenced. Two weeks later, visual acuity had decreased to 6/36 in the right eye with unchanged fundoscopic appearances. He underwent a vitreous biopsy. Vitreous PCR was positive for Toxoplasma and the patient also had a positive IgM Ab for T. gondii with a dye test of 4000 IU/mL. He continued azithromycin along with an increased dose of prednisolone. After 3 months, the Toxoplasma retinitis was quiescent and treatment was stopped; however, the patient had lost peripheral vision in his right eye and was unable to distinguish fine details. Case report 2 Case 2 was a 49-year-old male butcher with focal and segmental glomerulosclerosis (FSGS) who received his second cadaveric renal transplant in 2003. The donor was seropositive for T. gondii. Initial immunosuppressive treatment comprised tacrolimus and prednisolone. He received prophylactic cotrimoxazole at a dose of 480 mg three times weekly for 3 months. 
In 2004, recurrence of FSGS in the allograft was demonstrated on a renal transplant biopsy. At this point, he was commenced on MMF, in addition to tacrolimus and prednisolone. Tacrolimus was stopped in 2008. He was admitted in 2009 with lethargy and anorexia. On examination, the patient was confused with a right-sided pronator drift and hemiparesis. Blood tests revealed stable allograft function with a serum creatinine of 342 µmol/L, a normal white cell count (7.4 × 10⁹/L) and a C-reactive protein of 26 mg/L. An unenhanced cranial CT scan showed areas of low attenuation affecting the white matter of both temporal lobes and the left frontoparietal lobe, with mild mass effect (Figure 1). MRI with contrast highlighted multiple ring-enhancing lesions, which were thought to be due to metastatic disease or lymphoma (Figure 2). Treatment with dexamethasone was initiated. He continued to deteriorate and had several generalized tonic-clonic seizures. He was started on IV cotrimoxazole to cover potential toxoplasmosis. Results showed a strongly positive dye test at 1000 IU/mL for T. gondii along with a positive IgM Ab. As these were suggestive of active toxoplasmosis, treatment with pyrimethamine (200 mg stat followed by 50 mg daily), sulphadiazine (500 mg qds) and folinic acid (10-25 mg daily) was commenced. He developed aspiration pneumonia and died. Post-mortem examination of the body was not performed, in accordance with the wishes of the family. Discussion T. gondii is an intracellular protozoan parasite, with members of the cat family being the definitive hosts. Transmission to humans is usually by ingestion of undercooked meat containing tissue cysts or ingestion of infectious oocysts via food or water contaminated with feline faeces. In transplant patients, transmission of T. gondii from a seropositive donor to a seronegative recipient is an important potential cause of disease [2]. Therefore, toxoplasmosis in transplant patients can arise from reactivation of latent infection or from primary infection. Wulf et al. [4] reviewed 35 cases of toxoplasmosis following renal transplantation; of these, 7 (20%) occurred in seropositive recipients and 16 (46%) in seronegative recipients, and in 12 cases (34%) the serology was not known. Of the 16 cases resulting from a primary infection, 15 had a seropositive donor and were considered to be transplant-related [4]. In the two cases presented here, the first patient may have developed primary infection after eating undercooked meat whilst on holiday in Tenerife shortly before developing symptoms. The second case was most likely due to reactivation of latent infection, as he had previously worked as a butcher and would almost certainly have handled raw meat containing tissue cysts. Toxoplasmosis tends to occur within 3 months of transplantation, this being the period of maximum immunosuppression. In a review of 29 renal transplant recipients with toxoplasmosis [3], 25 of 29 patients developed the infection within 3 months of transplantation. In two cases, infection occurred after more than 1 year, and in another case, more than 2 years had elapsed. The two cases that we present are unusual in that there is a much greater interval between renal transplantation and infection, 4 and 6 years post-transplantation, respectively. Toxoplasmosis following renal transplantation is associated with mortality in up to 65% of recipients [3,4]. This is most likely due to both a lack of clinical awareness and difficulties in confirming the diagnosis.
The symptoms are often non-specific, but patients usually present with neurological disturbances or pneumonitis. Constitutional symptoms and signs, such as fever and malaise, are variable. More rarely, patients may also develop chorioretinitis, myocarditis, haemolytic anaemia and haemophagocytic-related pancytopaenia [2,5]. Concomitant infection with another pathogen is common and can add to diagnostic confusion [1]. Diagnostic tests include serology, isolation of the parasite from infected tissues or nucleic acid amplification by PCR. However, antibody titres can be hard to interpret in immunocompromised patients, and correct interpretation of PCR tests is difficult, with varying sensitivity and specificity results reported from different laboratories using the same probes [6]. Therefore, the diagnosis of toxoplasmosis requires a high index of suspicion, as prompt recognition and early treatment are key to increasing patient survival [1]. In summary, T. gondii remains a rare but significant pathogen in renal transplant recipients. Although most cases of toxoplasmosis occur shortly after transplantation, the two cases reported here were unusual in that they occurred at least 4 years later. This highlights the need for increased awareness and early diagnosis at all stages of the post-transplant period to improve an otherwise poor outcome of toxoplasmosis in renal transplant patients.
CURE: A Security Architecture with CUstomizable and Resilient Enclaves Security architectures providing Trusted Execution Environments (TEEs) have been an appealing research subject for a wide range of computer systems, from low-end embedded devices to powerful cloud servers. The goal of these architectures is to protect sensitive services in isolated execution contexts, called enclaves. Unfortunately, existing TEE solutions suffer from significant design shortcomings. First, they follow a one-size-fits-all approach offering only a single enclave type; however, different services need flexible enclaves that can adjust to their demands. Second, they cannot efficiently support emerging applications (e.g., Machine Learning as a Service), which require secure channels to peripherals (e.g., accelerators), or the computational power of multiple cores. Third, their protection against cache side-channel attacks is either an afterthought or impractical, i.e., no fine-grained mapping between cache resources and individual enclaves is provided. In this work, we propose CURE, the first security architecture which tackles these design challenges by providing different types of enclaves: (i) sub-space enclaves provide vertical isolation at all execution privilege levels, (ii) user-space enclaves provide isolated execution to unprivileged applications, and (iii) self-contained enclaves allow isolated execution environments that span multiple privilege levels. Moreover, CURE enables the exclusive assignment of system resources, e.g., peripherals, CPU cores, or cache resources, to single enclaves. CURE requires minimal hardware changes while significantly improving the state of the art of hardware-assisted security architectures. We implemented CURE on a RISC-V-based SoC and thoroughly evaluated our prototype in terms of hardware and performance overhead. CURE imposes a geometric mean performance overhead of 15.33% on standard benchmarks. Introduction For decades, software attacks on modern computer systems have been a persistent challenge leading to a continuous arms race between attacks and defenses. The ongoing discovery of exploitable bugs in the large code bases of commodity operating systems has proven them unsuitable for reliable protection of sensitive services [104,105]. This motivated various hardware-assisted security architectures integrating hardware security primitives tightly into the System-on-Chip (SoC). Capability-based systems, such as CHERI [100], CODOMs [95], IMIX [30], or HDFI [82], offer fine-grained protection through (in-process) sandboxing; however, they cannot protect against privileged software adversaries (e.g., a malicious OS). In contrast, security architectures providing Trusted Execution Environments (TEE) enable isolated containers, also called enclaves. Enclaves allow for a coarse-grained but strong protection against adversaries in privileged software layers. TEE architectures have been proposed for a variety of computing platforms, in particular for modern high-performance computer systems, e.g., industry solutions like Intel SGX [35], AMD SEV [38], ARM TrustZone [3], or academic solutions such as Sanctum [22], Sanctuary [10], Keystone [48], or Komodo [27], to name a few. In this paper, we focus on TEE architectures for modern high-performance computer systems. We investigate the shortcomings of existing TEE architectures and propose an enhanced and significantly more flexible TEE architecture with a prototype implementation for the open RISC-V architecture.
Deficiencies of existing TEE architectures. So far, existing TEE architectures have adopted a one-size-fits-all enclave approach. They provide only one type of enclave, requiring applications and services to be adapted to these enclaves' features and limitations; e.g., Intel SGX restricts system calls of its enclaves, and thus applications need to be modified when being ported to SGX, which produces additional costs. Additional efforts like Microsoft's Haven framework [5] or Graphene [87] are needed to deploy unmodified applications to SGX enclaves. Moreover, today we are using diverse services that process sensitive data, e.g., payment, biometric authentication, smart contracts, speech processing, Machine Learning as a Service (MLaaS), and many more. Each service imposes a different set of requirements on the underlying TEE architecture. One important requirement concerns the ability to securely connect to devices. For example, on mobile devices, privacy-sensitive data is constantly collected over various sensors, e.g., audio [9], video [83], or biometric data [19]. On cloud servers, massive amounts of sensitive data are aggregated and used to train proprietary machine learning models, often outside of the CPU, offloaded to hardware accelerators [84]. However, TEE architectures such as SGX [35], SEV [38] and Sanctum [22] do not consider secure I/O at all; solutions such as Keystone [48] would require additional hardware to support DMA-capable peripherals; solutions like Graviton [96] require hardware changes at the peripheral side; and TrustZone [3], Sanctuary [10] and Komodo [27] cannot bind peripherals directly to individual enclaves. Another important requirement imposed on TEE architectures is an adequate and practical protection against side-channel attacks, e.g., cache [11,50] or controlled side-channel attacks [65,92,101]. Current TEE architectures either do not include cache side-channel attacks in their threat model, like SGX [35] or TrustZone [3]; only provide impractical solutions which heavily influence the OS, like Sanctum [22]; or do not consider controlled side-channel attacks, e.g., SEV [38]. We will elaborate on the related work and the problems of existing TEE architectures in detail in Section 9. This work. In this paper, we present a TEE architecture, coined CURE, that tackles the problems of existing solutions with a cost-effective and architecture-agnostic design. CURE offers multiple types of enclaves: (i) sub-space enclaves that isolate only parts of an execution context, (ii) user-space enclaves, which are tightly integrated into the operating system, and (iii) self-contained enclaves, which can span multiple CPU cores and privilege levels. Thus, CURE is the first TEE architecture offering a high degree of freedom in adjusting enclave boundaries to fulfill the individual functionality and security requirements of modern sensitive services such as MLaaS. CURE can bind peripherals, with and without DMA support, exclusively to individual enclaves. Further, it provides side-channel protection via flexible and fine-grained cache resource allocation. Challenges. Building a TEE architecture with the described properties comes with a number of challenges. (i) New hardware security primitives must be developed that allow enclaves to adapt to different functionality and security requirements. (ii) Even though the security primitives should allow flexible enclaves, they must not require invasive hardware modification, which would impede cross-platform adoption.
(iii) While the changes in hardware should remain small, the performance overhead for managing enclaves in software must be minimized. (iv) Protections against the emerging threat of microarchitectural attacks in the form of side-channel and transient-execution attacks must be considered in the design for all types of enclaves. Contributions. Our design of CURE and its implementation on the RISC-V platform tackles all these challenges. To summarize, our main contributions are as follows: • We present CURE, our novel architecture-agnostic design for a flexible TEE architecture which can protect unmodified sensitive services in multiple enclave types, ranging from enclaves in user space, over sub-space enclaves, to self-contained (multi-core) enclaves which include privileged software levels and support enclave-to-peripheral binding. • We introduce novel hardware security primitives for the CPU cores, system bus and shared cache, requiring minimal and non-invasive hardware modifications. • We prototype CURE for the open RISC-V platform using the open-source Rocket Chip generator [4]. • We evaluate CURE's hardware and software components in terms of added logic and lines of code, and CURE's performance overhead on an FPGA and cycle-accurate simulator setup using micro- and macrobenchmarks. System Assumptions CURE targets a modern high-performance multi-core system, with common performance optimizations like data and instruction caches, a Translation Lookaside Buffer (TLB), shared caches, branch predictors, respective instructions to flush the core-exclusive resources, and a central system bus that connects the CPU with the main memory (over a dedicated memory controller) and various peripherals. System bus and peripherals. The system bus connects the CPU to a plethora of system peripherals over a fixed set of hardwired peripheral controllers. The peripherals range from storage, communication, and input devices to specialized compute units, e.g., hardware accelerators [37]. The CPU interacts with peripherals using parts of the internal peripheral memory which are mapped to the address space of the CPU, called Memory-Mapped I/O (MMIO). We assume that the CPU can nullify the internal memory of a peripheral to sanitize its state. Every access from the CPU to a peripheral is decoded in the system bus and delegated to the corresponding peripheral. The CPU acts as a parent on the system bus, whereas the peripherals (and main memory) act as children that respond to requests from a parent. However, MMIO is not sufficient for some peripherals where large amounts of data need to be shared with the CPU, since the CPU needs to copy the data from the main memory to the peripheral memory. Therefore, these peripherals are often connected to the system bus as parents over Direct Memory Access (DMA) controllers, allowing them to directly access the main memory. To cope with resource contention in these complex interconnects, system buses also incorporate arbitration mechanisms to schedule the establishment of parent-child connections when multiple bus requests occur simultaneously. Software privilege levels. We assume the CPU supports the privilege levels (PLs) as shown in Figure 1. In line with modern processors (Intel [21], AMD [34] or ARM [55]), we assume a separation between a user-space layer (PL3) and a more privileged kernel-space layer (PL2), which is performed by the MMU (configured by PL2 software) through virtual address spaces.
The CPU may support a distinct layer for hypervisor software (PL1) to run virtualized OSes in Virtual Machines (VMs), where the separation to PL2 is performed by a second level of hardware-assisted address translation [73]. Lastly, we assume a highly privileged layer (PL0) which contains firmware that performs specific tasks, e.g., hardware emulation or power management. We assume that the system performs secure boot on reset, whereby the first bootloader, stored in CPU Read-Only Memory (ROM), verifies the firmware through a chain of trust [53]. After verification, the firmware starts execution from a predefined address in the firmware code and loads the current firmware state from non-volatile memory (NVM), where it is stored encrypted, integrity- and rollback-protected. The cryptographic keys to decrypt and verify the firmware state are passed by the bootloader which loads the firmware into Random-Access Memory (RAM). Rollback protection can be achieved, e.g., by making use of non-volatile memory with Replay Protected Memory Block (RPMB) partitions or by using eFuses as secure monotonic counters [56]. When a system shutdown is performed, the firmware stores its state in the NVM, encrypted and integrity- and rollback-protected. Adversary Model Our adversary model adheres to the one commonly assumed for TEE architectures, i.e., a strong software-only adversary that can compromise all software components, including the OS, except a small software/microcode Trusted Computing Base (TCB) which configures the hardware security primitives of the system, manages the enclaves and which is inherently trusted [3,10,22,27,35,48]. We assume that the goal of the adversary is to leak secret information from the TCB or from a victim enclave. An adversary with full control of the system software can inject its own code into the kernel (PL2) and even into the hypervisor (PL1). This allows the adversary, with full access to the TCB interface used for setting up enclaves, to spawn malicious processes and even enclaves. Even though the adversary cannot change the firmware code (which uses secure boot), memory corruption vulnerabilities might still be present in the code and be exploitable by the adversary [24]. In addition, we assume that an adversary is able to compromise peripherals from software to perform DMA attacks [63,76]. We assume the underlying hardware to be correct and trusted, and hence exclude attacks that exploit hardware flaws [40,86]. We also do not assume physical access, and thus fault injection attacks [6], physical side-channel attacks [46,62] or the physical connection of malicious peripherals are out of scope. We do not consider Denial-of-Service (DoS) attacks in which the adversary starves an enclave, since an adversary with control over the OS can shut down the complete system trivially. As standard for TEE architectures, CURE does not protect from software-exploitable vulnerabilities in the enclave code but prevents their exploitation from compromising the complete system. Requirements Analysis To provide customizable, practical and strongly isolated enclaves, CURE must fulfill a number of security and functionality requirements. We list them in the following section, and show in Section 7 how CURE fulfills the security requirements. In Section 6 and Section 8, we demonstrate how the functionality requirements are met. Security Requirements (SR) SR.1: Enclave protection. Enclave code must be integrity-protected when at rest, and inaccessible for an adversary when executed.
All sensitive enclave data must remain confidential and integrity-protected at all times. An enclave must be protected from adversaries on all software layers (PL3-PL0), other potentially malicious enclaves, and DMA attacks [63,76]. SR.2: Hardware security primitives. The protection of the enclaves must be enforced by secure hardware components which can only be configured by the software TCB. SR.3: Minimal software TCB. The TCB must be protected from adversaries in all software layers (PL3-PL0) and minimal in size to be formally verifiable, i.e., a few KLOCs [44]. SR.4: Side-channel attack resilience. Mitigations against the most relevant software side-channel attacks must be available, namely, side-channel attacks on cache resources [31,50,70,102], controlled side-channel attacks [65,92,101] and transient-execution attacks [12,14,43,45,78,89,90,93]. Functionality Requirements (FR) FR.1: Dynamic enclave boundaries. The trust boundaries of an enclave must be freely configurable such that enclaves at different privilege levels can be supported. FR.2: Enclave-to-peripheral binding. Secure communication between enclaves and selected system peripherals, e.g., when offloading sensitive machine learning tasks to hardware accelerators [84], must be explicitly supported. FR.3: Minimal hardware changes. The hardware changes required to integrate the proposed security primitives into a commodity SoC (cf. Section 2) must be minimal; no invasive changes to CPU internals may be required, to enable a broader adoption of CURE in future platforms. FR.4: Reasonable performance overhead. The performance overhead incurred during enclave setup and run time must be minimized and must not render the computer system impractical for certain use cases or degrade user experience. FR.5: Configurable protection mechanisms. Protection mechanisms against cache side-channel attacks must be applicable dynamically at run time and on a per-enclave basis. Design of the CURE Architecture CURE provides a novel design that addresses the requirements described above and provides a TEE architecture with strongly isolated and highly customizable enclaves, which can be adapted to the requirements of the services they protect. Unlike other TEE architectures, which only provide a single enclave type, CURE allows enclave boundaries to be freely defined, and thus different enclaves can be constructed, as shown in Figure 2. First, in Section 5.1, we describe the ecosystem around CURE. Then, we elaborate on the different enclave types in Section 5.2. CURE's key component enabling this flexible enclave construction is its enclave-ID-based access control in the system bus, which manages all per-enclave resource mappings, e.g., peripherals or main memory, indicated by the different background patterns in Figure 2 and Figure 3. Our hardware primitives are presented in Section 5.3. CURE Ecosystem The ecosystem around CURE consists of device vendors which produce the devices implementing CURE, device users and service providers. Some services contain sensitive data (from the users and/or the service provider) and thus must be protected. In CURE, sensitive services are either split into a sensitive and a non-sensitive part, which get included into an enclave and a user-space app (called host app), respectively, or alternatively, integrated entirely into an enclave, requiring only minimal modifications to the service. In the latter case, the host app is only needed to trigger the enclave.
Initially, the enclave binary does not contain sensitive data. For every enclave, the service provider creates a configuration file which contains the enclave's requirements regarding system resources (e.g., memory, caches or peripherals), a version number and an enclave label L_Encl. [Figure 2: CURE privilege levels and enclave types, namely, user-space enclaves (Encl 1), kernel-space enclaves (Encl 2, Encl 3) and sub-space enclaves (Encl 4).] Enclave binary, configuration file and host app are bundled and deployed by the service provider over an app store (e.g., Google Play Store) which is operated by a third party (e.g., Google). The label L_Encl is globally unique in the app store. Every service provider creates an asymmetric key pair SK_SP and PK_SP, and a public key certificate Cert_SP, which is signed by the app store operator. Using the secret key SK_SP, the service provider signs the enclave binary and configuration file (Sig_Encl) and attaches it, together with Cert_SP, to the app bundle. Cert_SP can later be used on the device to verify Sig_Encl. For this, a certificate chain Chain_AS up to the root certificate of the app store operator must be present on the device. When the service provider wants to update an enclave, a new signature must be created and the version number in the configuration file updated, which prevents rollbacks to older (possibly flawed) versions of an enclave [103]. A device vendor creates a unique asymmetric key pair SK_D and PK_D for each device, which is provisioned to the device during production, and a public key certificate Cert_D signed by the device vendor which can later be used to prove the legitimacy of the device in a remote attestation scheme. For this, the service provider must obtain a certificate chain Chain_D up to the root certificate of the device vendor. When a device is compromised, Cert_D can also be revoked. Customizable and Resilient Enclaves CURE supports enclaves that protect user-space processes (Encl 1), run in the kernel space (Encl 2) or span the kernel and user space (Encl 3). However, an enclave does not necessarily include all code of a privilege level; e.g., an enclave can comprise only parts of the firmware code (Encl 4). Enclave Management Before describing the different enclave types supported by CURE, we give an overview of CURE's enclave management. Security monitor. All CURE enclaves are managed by the software TCB, called the Security Monitor (SM), as in other TEE architectures [22,48]. As indicated in Figure 2, the SM itself represents an enclave which is part of the firmware. As described in Section 2, we assume a system that performs a secure boot on reset, verifies the firmware (including the SM) and then jumps to the entry point of the SM. Further, we assume that the SM has already loaded its rollback-protected state into the volatile main memory. The SM state contains SK_D, PK_D, Cert_D, Chain_AS and a meta-data structure M_Encl for each enclave installed on the device. Enclave installation. When an enclave is deployed to the device, the SM first verifies the signature Sig_Encl using Cert_SP and Chain_AS. Then, the SM creates a new enclave meta-data structure M_Encl and stores L_Encl, Sig_Encl and Cert_SP in it. Moreover, the SM creates an enclave state structure S_Encl which is used to persistently store all sensitive enclave data. The SM also creates an authenticated encryption key K_State which is used to protect the enclave state when it is stored to disk or flash memory. K_State and S_Encl are also stored in M_Encl. Initially, S_Encl contains only an authenticated encryption key K_Comm created by the SM, which is used by the enclave to encrypt and integrity-protect data communicated to the untrusted OS, and a monotonic counter. The enclave meta-data structure M_Encl also contains a monotonic counter used to rollback-protect the enclave state.
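To summarize the trust relationships of this deployment scheme, a minimal C sketch of the on-device bundle verification follows, using the notation introduced above. The struct layouts and the ed25519_verify primitive are illustrative assumptions (to be supplied by a signature library), not CURE's actual interfaces.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed signature primitive: returns true iff sig is valid for msg under pk.
 * Link against an Ed25519 implementation; declared here only as a placeholder. */
bool ed25519_verify(const uint8_t pk[32], const uint8_t *msg, size_t len,
                    const uint8_t sig[64]);

typedef struct {
    uint8_t subject_pk[32];   /* PK_SP embedded in Cert_SP */
    uint8_t issuer_sig[64];   /* signature by the app store operator */
} cert_t;

typedef struct {
    const uint8_t *binary;    /* enclave binary */
    size_t binary_len;
    const uint8_t *config;    /* configuration file: resources, version, L_Encl */
    size_t config_len;
    uint8_t sig_encl[64];     /* Sig_Encl over binary || config */
    cert_t cert_sp;           /* Cert_SP */
} bundle_t;

/* Chain_AS reduced to the app store operator's root public key for simplicity. */
bool verify_bundle(const bundle_t *b, const uint8_t chain_as_root_pk[32]) {
    /* 1. Check that Cert_SP was issued by the app store operator. */
    if (!ed25519_verify(chain_as_root_pk, b->cert_sp.subject_pk, 32,
                        b->cert_sp.issuer_sig))
        return false;
    /* 2. Check Sig_Encl over enclave binary and configuration file.
     * Real code would sign a hash; the flat concatenation is for illustration. */
    uint8_t msg[4096];
    if (b->binary_len + b->config_len > sizeof msg)
        return false;
    memcpy(msg, b->binary, b->binary_len);
    memcpy(msg + b->binary_len, b->config, b->config_len);
    return ed25519_verify(b->cert_sp.subject_pk, msg,
                          b->binary_len + b->config_len, b->sig_encl);
}
```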
Enclave setup & teardown. The setup of an enclave is always triggered by the corresponding host app. After the OS loads the enclave binary and configuration file, it performs a context switch to the SM. The SM identifies the enclave by the label L_Encl and begins the enclave setup by (1) configuring the hardware security primitives (Section 5.3) such that one or multiple continuous physical memory regions (according to the configuration file) are exclusively assigned to the enclave in order to isolate the enclave from the rest of the system software. Since the binary and configuration file are loaded by untrusted software, their integrity must always be verified using Sig_Encl and Cert_SP. Assigning physical memory regions is inevitable when providing enclaves which are able to execute privileged software (kernel-space enclaves), since this allows the enclave to control the MMU. Thus, virtual memory cannot be used to effectively isolate the enclave. (2) After enclave verification, the SM configures the hardware primitives to also assign the rest of the system resources, e.g., cache or peripherals, to the enclave according to the configuration file. All assigned resources are also noted in M_Encl. Moreover, the SM assigns an identifier to the enclave which is stored in M_Encl and which is unique for every enclave currently active on the device. The SM can manage up to N_Encl (implementation-defined) enclaves in parallel. We provide more details on the meaning of the enclave identifier in Section 5.3. (3) In the last step, the enclave state is restored, i.e., loaded from disk or flash memory, decrypted and verified using K_State, and then copied to the enclave memory such that it is accessible during enclave runtime. The SM also checks that the monotonic counter in S_Encl matches the counter stored in M_Encl. The SM configures all interrupts to be routed to the SM while an enclave is running. Thus, the SM fully controls the context switches into and out of an enclave. While the SM is executed, all interrupts on the CPU core executing the SM are disabled. All other cores remain interrupt-responsive. In CURE, hardware-assisted hyperthreading is disabled during enclave execution to prevent data leakage through resources shared between the hardware threads. Alternatively, all hardware threads of a CPU core could be assigned to the enclave if the enclave code benefits from parallelization. In the remainder of the paper, we assume that hyperthreading is disabled during enclave runtime. After the setup is complete, the SM jumps to the entry point of the enclave. During the enclave teardown, which can be triggered by the host app or the enclave itself, the SM securely stores the enclave state (using K_State) while incrementing the monotonic counters in S_Encl and M_Encl, removes all enclave data from the memory and caches, and reconfigures the hardware primitives. Enclave execution. At run time, enclaves can access services provided by the SM over its API, e.g., to dynamically increase the enclave's memory or to receive an integrity report which the SM creates by signing the enclave's measurement with SK_D and attaching Cert_D. The integrity report is then sent to the service provider by the enclave. Subsequently, using Chain_D, the service provider can perform a remote attestation of the enclave. Only if the attestation succeeds does the service provider provision sensitive data to the enclave. More complex remote attestation schemes [61] could also be implemented.
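A condensed C sketch of the three setup steps described above follows; all structures and helper functions are hypothetical stand-ins for the SM internals, not the prototype's actual code.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t base, size; } region_t;

typedef struct {
    uint8_t  label[32];       /* L_Encl */
    uint64_t meta_counter;    /* rollback counter kept in M_Encl */
    uint8_t  k_state[32];     /* K_State */
    /* ... Sig_Encl, Cert_SP, list of assigned resources ... */
} meta_t;

/* Hypothetical SM helpers (not defined here). */
bool region_assign(region_t r, uint8_t eid);         /* program bus arbiter (SP2) */
bool binary_verify(const meta_t *m);                 /* check Sig_Encl with Cert_SP */
void resources_assign(const meta_t *m, uint8_t eid); /* caches, peripherals (SP2/SP3) */
bool state_restore(const meta_t *m, region_t r, uint64_t *state_counter);

int sm_enclave_setup(meta_t *m, region_t r, uint8_t eid) {
    /* (1) isolate the enclave memory region and verify the loaded binary */
    if (!region_assign(r, eid) || !binary_verify(m))
        return -1;
    /* (2) assign the remaining resources according to the configuration file */
    resources_assign(m, eid);
    /* (3) restore, decrypt and verify the enclave state; check rollback counter */
    uint64_t state_counter;
    if (!state_restore(m, r, &state_counter) || state_counter != m->meta_counter)
        return -1;
    return 0; /* the SM now routes interrupts to itself and jumps to the entry point */
}
```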
Enclaves might use services of the untrusted OS which do not require access to the plain sensitive enclave data, e.g., file or network I/O. For those cases, an enclave can utilize K_Comm, which is part of S_Encl, to protect its sensitive data. CURE also allows multiple enclaves to share encrypted sensitive data over the OS. However, the required key exchange is assumed to be performed over the back ends of the service providers and is thus out of scope for CURE. Every enclave which includes a cryptographic library can also create its own keys (apart from K_Comm) and store them in S_Encl. Thus, enclaves can also implement key rotation, revocation or recovery schemes, which is, however, the responsibility of the service provider and thus out of scope for CURE. On every enclave setup/teardown and context switch in and out of an enclave, the SM flushes all core-exclusive cache resources, i.e., the data cache, the TLB and the BTB, thereby preventing information leakage across execution contexts. User-space Enclaves User-space enclaves (Encl 1 in Figure 2) comprise a complete user-space process. OS integration. The key characteristic of a user-space enclave is its tight integration into the OS, i.e., it relies on the OS for memory management, exception/interrupt handling and other services provided through syscalls (e.g., file system or network I/O). The OS schedules user-space enclaves like normal user-space processes, only that the context switches in and out of the enclave are intercepted by the SM. The OS's services are used by all user-space enclaves, which prevents code duplication. Moreover, user-space enclaves do not contain management software, leading to smaller binaries. Controlled side-channel defenses. In controlled side-channel attacks, the adversary gains information about an enclave's execution state by observing the usage of resources managed by the OS, predominantly page tables [65,92,101]. CURE defends against these attacks by moving the page tables of user-space enclaves into the enclave memory. More subtle controlled side-channel attacks exploit the fact that the enclave's interrupt handling is performed by the OS [91]. CURE also mitigates these attacks by allowing each enclave to register trap handlers to observe its own interrupt behavior and act accordingly if suspicious behavior is detected [15,79]. Limitations & usage scenarios. A user-space enclave cannot run higher-privileged code, e.g., device drivers. Thus, all sensitive data shared with a peripheral has to be processed by drivers in the untrusted OS and is therefore unprotected if not encrypted. Hence, user-space enclaves are unable to protect sensitive services which interact with devices like sensors or GPUs. Instead, user-space enclaves are beneficial when protecting short-lived services that can rely on encrypted data transmission, e.g., One Time Password (OTP) generators, payment services, digital key services and many more. Kernel-space Enclaves Kernel-space enclaves can comprise only the kernel space (Encl 2), or the kernel and user space (Encl 3). Providing OS services. The key characteristic of a kernel-space enclave is its capability to run code bare-metal on a CPU core in the privileged (PL2) software layer, or even in the hypervisor level (PL1) if available. Thus, OS services, e.g., memory management, can be implemented inside the enclave in a runtime (RT) component (Figure 2). This results in less resource sharing with the untrusted OS, and thus it is easier to protect against controlled side-channel attacks [91,92,101].
Moreover, by including device drivers into the RT, a secure communication channel to peripherals can be established. Furthermore, kernel-space enclaves provide more computational power since CURE allows kernel-space enclaves to run across multiple cores. In CURE, peripherals can either be assigned exclusively to a single enclave by the SM at enclave setup, or shared between different enclaves and/or the OS. The peripheral's internal memory is flushed by the SM when (re-)assigned to a new entity to prevent information leakage [49,72,107]. Protecting virtual machines. CURE's ability to include the kernel space into the enclave allows the construction of enclaves that encapsulate complete virtual machines (VMs). VMs are not self-contained but rely on memory and peripheral management services provided by a hypervisor, which makes the VM enclave vulnerable to controlled side-channel attacks [38,51]. CURE mitigates this by moving the VM page tables into the enclave memory and including unmodified complete drivers into the enclave to avoid dependencies on the untrusted hypervisor [16,17]. As for other kernel-space enclaves, peripherals are temporarily assigned to VM enclaves by the SM. Again, before a peripheral is reassigned, its internal memory is sanitized by the SM. Limitations & usage scenarios. Sensitive services can be ported to kernel-space enclaves without changing them. However, in contrast to user-space enclaves, an enclave RT needs to be added, which increases the binary size, adds development overhead and increases the memory consumption. Moreover, the CPU cores selected for the enclave first have to be freed from pending processes, detached from the OS, and the RT booted on them. Nevertheless, kernel-space enclaves are required when protecting services which heavily rely on peripheral communication, e.g., authentication services using biometric sensors, ML services collecting input data over sensors or offloading computations to accelerators, DRM services, or in general services which require secure I/O. Sub-space Enclaves In CURE, enclave trust boundaries can be freely defined, which allows the construction of fine-grained enclaves that only include parts of the software residing in a privilege level, therefore called sub-space enclaves. Shrinking the TCB. Sub-space enclaves are especially appealing when constructed in the highest privilege level (PL0) of the system (Encl 4 in Figure 2). In CURE, sub-space enclaves are used to isolate the SM from the firmware code to protect against exploitable memory corruption vulnerabilities that might be present in the firmware code [24]. Moreover, hardware countermeasures, described in Section 5.3, are used to prevent the firmware code from accessing the SM data or hardware primitives. Ultimately, this minimizes the software TCB in CURE, as opposed to other TEE architectures that rely on a software TCB containing all code in the highest privilege level, i.e., EL3 (ARM) or the machine level (RISC-V), e.g., TrustZone [3], Sanctuary [10], Sanctum [22], Keystone [48]. [Figure 3: CURE Security Primitives (SPs), added at core register files (SP1), system bus (SP2) and shared cache (SP3).] Hardware Security Primitives To provide CURE's customizable enclaves, new security primitives (SP) are needed in hardware. Our SPs augment the register file of each CPU core (SP1), the system bus (SP2) and the shared cache (SP3). Figure 3 shows where CURE's SPs integrate into a modern system as assumed in Section 2. Defining Enclave Execution Contexts (SP1) Enclave ID register.
In CURE, enclave execution contexts are defined using IDs, which are saved in a register that is added to every CPU core of the system (SP1). At any point in time, the value of this register, called the eid (enclave ID) register, indicates which enclave a core currently executes. The eid registers are set by the SM during enclave setup, teardown and any context switch in and out of an enclave, thus enabling flexible configuration of enclave boundaries. Whenever an enclave is set up, the SM assigns it an unused ID. In contrast to the constant enclave labels (Section 5.2.1), which are globally unique, an enclave ID is only valid as long as the enclave is loaded in memory. When an enclave is torn down, the ID gets freed and can be assigned to the next enclave. Constant IDs are only assigned to the SM and all untrusted software. The number of different IDs (N_Encl) that can be stored in eid defines how many enclaves can run in parallel (Section 5.2.1). However, the total number of enclaves that can be deployed is not restricted. Propagating the enclave ID. The enclave ID is propagated through the entire system and used in the SPs to perform access control on the system resources. We incorporate the enclave ID in the bus protocol between the CPU, shared cache and system bus. In protocols like AMBA AXI4/ACE [54], the de facto on-chip communication standard, no protocol extensions are required since the bus channels provide optional user-defined signals which can be utilized to transmit the enclave ID in bus transactions. In our CURE prototype, we extend the TileLink protocol [80] by an enclave ID signal, which we describe in more detail in Section 6. Access Control on the Bus (SP2) In order to isolate enclaves and assign peripherals to them, access control mechanisms need to be implemented in hardware. As described in Section 2, the system bus represents the central gateway of a computer system that connects bus parents (CPU or DMA devices) with bus children (peripherals or the main memory) and routes all their transactions. CURE leverages this centralization and further extends it to perform access control on parent-child transactions (SP2 in Figure 3). Incorporating carefully crafted access control at the system bus, with latency and performance in mind, reduces the overall hardware costs significantly. Enclave memory isolation. One key task of a TEE architecture is enforcing strong isolation of the enclave code and data in the main memory. In CURE, this is achieved by performing access control in the arbiter logic in front of the main memory chip, as shown in Figure 3. This requires adding new registers and control logic to the already existing arbiter, which can only be configured (over MMIO) by the SM to assign memory regions to enclaves. Whenever the CPU requests access to a memory address, the arbiter uses the enclave ID signal, which is sent within the bus transaction, to verify whether the enclave currently executing is allowed to access the memory region. On an access violation, the memory access is prevented and an interrupt is triggered by the system bus, which is handled by the SM. Incorporating the required logic for this access control at the main memory side, instead of the CPU side, reduces the additional registers and logic required, which would otherwise be duplicated for every CPU core, as we show in Section 8.1. Assigning peripherals to enclaves. The CPU interacts with peripherals over peripheral memory mapped to the CPU address space (MMIO).
In CURE, access control on the MMIO memory is performed using registers and control logic added to the arbiter at the peripheral bus. The SM assigns the MMIO region of every peripheral either to one enclave exclusively or to multiple enclaves/the OS by configuring the arbiter registers. Access control is then performed in the added hardware logic based on the enclave ID signal of a bus transaction. Incorporating this logic at the CPU side would have increased the hardware costs because of per-core duplication. DMA protection. Peripherals which share large amounts of data with the CPU typically access the main memory directly over a DMA controller. CURE must protect enclaves from DMA attacks [63,76] and also allow DMA-capable peripherals to be assigned to enclaves. To achieve this, CURE adds registers and control logic to the decoder in front of every DMA device. These registers define which memory regions the DMA device is allowed to access. Whenever a DMA device gets assigned to an enclave, the SM updates the device registers accordingly. Adding the required logic at the child arbiters would increase the hardware costs, because enclave IDs would also need to be assigned to the DMA devices, which would result in additional logic for ID comparison. By assigning dedicated memory regions to an enclave and a DMA-capable peripheral, and by assigning the MMIO memory regions of that peripheral exclusively to the enclave, CURE achieves an enclave-to-peripheral binding. Since neither the OS nor any other enclave can access the memory regions over which the bound enclave and peripheral communicate, no encryption or authentication schemes are required. On-Demand Cache Partitioning (SP3) CURE's enclave management (Section 5.2.1) mitigates side-channel attacks on core-exclusive resources, such as the L1 cache, by flushing all such structures at every enclave context switch. Nevertheless, this still leaves enclaves vulnerable to cross-core attacks on the shared last-level cache [36,39,102]. However, vulnerability to these sophisticated attacks depends on whether the enclave code performs memory accesses dependent on sensitive data. While algorithms and implementations can be constructed to be leakage-resilient [2,68], this is not directly applicable to any given application code, and thus we provide on-demand per-enclave cache partitioning in CURE. Security guarantees for cache side-channel resilience can be provided in hardware by either enforcing strict partitioning of resources across the different enclaves [42,58,97] or deploying randomization-based cache schemes [59,60]. Nevertheless, these schemes either reduce the cache resources available to an enclave or incur additional access latency. This results in an inevitable performance overhead on the protected as well as the unprotected software. The additional security guarantee, along with its resulting performance cost, is not usually required for all enclaves and largely depends on the use case. Thus, CURE addresses these diverse enclave requirements and incorporates on-demand way-based partitioning of the shared cache (SP3 in Figure 3). This allows cache partitioning to be enabled and configured individually and dynamically for each enclave at setup time and at runtime. Each cache way can be allocated exclusively to an enclave. Access control on the enclave ID signal of the memory access transaction is used to permit the enclave to access (read/write or even evict) a cache way, thus ensuring strict isolation.
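The following C model illustrates the SP2/SP3 access checks described above: each memory region and each cache way carries an owner enclave ID, and the hardware compares it with the ID attached to the bus transaction. Register counts and layout are illustrative assumptions, not the prototype's actual hardware interface.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_REGIONS 16
#define NUM_WAYS    8
#define ID_OS       0x0   /* OS/untrusted software */

typedef struct {
    uint64_t base, size;
    uint8_t  owner;       /* enclave ID allowed to access this region */
} region_cfg_t;

static region_cfg_t regions[NUM_REGIONS]; /* arbiter registers, SM-configured */
static uint8_t way_owner[NUM_WAYS];       /* per-way owner for the partitioned cache */

/* SP2: may a bus transaction tagged with eid reach address addr? */
bool bus_access_ok(uint64_t addr, uint8_t eid) {
    for (int i = 0; i < NUM_REGIONS; i++) {
        const region_cfg_t *r = &regions[i];
        if (addr >= r->base && addr < r->base + r->size)
            return r->owner == eid; /* a violation raises an interrupt to the SM */
    }
    return eid == ID_OS; /* unassigned regions belong to the OS */
}

/* SP3: may enclave eid allocate/evict in cache way w when partitioning is on? */
bool cache_way_ok(int w, uint8_t eid) {
    return way_owner[w] == eid;
}
```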
However, when this cache isolation is not enabled for an enclave, only read/write access control on the owner enclave of each cache line is performed. This defends against a privileged adversary that accesses cached enclave memory by mapping it into its own address space. As each cache line is owned by a single enclave at any point in time, access control on cache lines corresponding to memory shared between enclaves and the OS is a challenge. To address this, the SM flushes the relevant cache lines at context switches between an enclave and the OS while managing shared-memory communication. We deploy way-based partitioning because it requires the least extensive hardware modifications. However, CURE provides the necessary infrastructure and mechanisms (by identifying each enclave and propagating this identity throughout the system bus and shared cache) to incorporate more sophisticated side-channel-resilient cache designs [25,74,99].

Prototyping CURE on RISC-V

While CURE is architecture-agnostic and can be ported to other ISAs, we prototype it here for a RISC-V system based on the open-source Rocket Chip generator [4]. We describe next our CURE instantiation, followed by details on the implemented enclave types and hardware security primitives.

RISC-V System-on-Chip platform. We build a RISC-V System-on-Chip (SoC) using the Rocket Chip generator [4]. For prototyping, we equipped the SoC with multiple in-order Rocket cores, in line with prototyping efforts in related work [22]. Each Rocket core has one hart (representing a hardware thread) and its own MMU, BTB, TLB and L1 cache. The SoC also contains a system bus which connects the cores to system peripherals (over the peripheral bus) and to main memory. We integrate a shared L2 cache [81] between the system bus and the main memory. A DMA device is connected to the system bus as a bus parent. As a result, this SoC resembles our assumed platform shown in Figure 3, except that the L2 cache is integrated as a last-level cache after the system bus. We implement our prototype on this SoC aiming at minimal hardware cost and no additional latency. We use 4 bits to represent the enclave ID, i.e., our prototype can distinguish 16 (2^4) contexts, where ID 0 is statically assigned to the OS, ID 0xF to the Security Monitor (SM) and ID 0xE to the firmware (explained in Section 6.2.2). The remaining 13 IDs can be freely assigned to enclaves. We assign one continuous physical memory region to each enclave, resulting in the memory layout shown in Figure 4. We choose to assign only one region per enclave to simplify our prototype and minimize the induced hardware overhead. The CURE design, however, also allows for multiple continuous regions per enclave. The SM and firmware memory regions are adjacent since they are both deployed as part of the bootloader [29]. All regions not assigned to an enclave, the SM or the firmware belong to the OS. Supporting more enclaves in parallel is possible if the additional hardware overhead is acceptable.

Software stack. The Rocket core supports three software privilege levels (user, supervisor and machine). Hypervisor support is still work in progress [28] and thus, we do not consider it in our prototype. In the supervisor level, we use an OS consisting of a modified Linux LTS kernel 4.19 with a Busybox 1.29.3 environment. We add a custom kernel module which performs security-uncritical tasks during the enclave setup.
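To make the prototype's ID management concrete, here is a minimal C sketch of the bookkeeping the SM could perform, assuming the 4-bit layout above; the function names and the bitmap representation are illustrative, not taken from the CURE code base:

```c
#include <stdint.h>

#define EID_OS 0x0   /* statically assigned to the OS       */
#define EID_FW 0xE   /* statically assigned to the firmware */
#define EID_SM 0xF   /* statically assigned to the SM       */

/* Bitmap of IDs in use; the three reserved IDs are set from boot,
 * leaving the 13 IDs 0x1..0xD for enclaves. */
static uint16_t eid_in_use =
    (1u << EID_OS) | (1u << EID_FW) | (1u << EID_SM);

/* Enclave setup: hand out an unused ID, or -1 if all 13 slots are
 * taken (i.e., the maximum number of parallel enclaves is reached). */
static int eid_alloc(void)
{
    for (int id = 0x1; id <= 0xD; id++) {
        if (!(eid_in_use & (1u << id))) {
            eid_in_use |= (1u << id);
            return id;
        }
    }
    return -1;
}

/* Enclave teardown: the ID becomes valid for the next enclave,
 * mirroring the design rule that IDs only live as long as the
 * enclave is loaded in memory. */
static void eid_free(int id)
{
    eid_in_use &= (uint16_t)~(1u << id);
}
```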
We implement the SM in the machine level as a sub-space enclave to separate it from the firmware, which runs at the same privilege level.

Cryptographic underpinnings. In the implemented CURE prototype, we use Ed25519 [71] as the digital signature scheme for the signing and verification of the enclave signature and of the integrity report used for remote attestation, as described in Section 5.2.1. Thus, the signing/verification key pairs are Ed25519 key pairs, and the corresponding public key certificates are implemented in the X.509 format. In our CURE prototype, the certificate chains required to verify these certificates are, for the sake of simplicity, represented by two Ed25519 public keys. As described in Section 5.2.1, one chain is included in the SM, whereas the other is required at the service provider. The enclave state and the enclave data communicated with the OS are protected through authenticated encryption under dedicated enclave keys. We use AES-GCM from libtomcrypt 1.18.2 [52] as the authenticated encryption scheme and include it in the SM. Moreover, we also add it to our implemented enclaves, such that the enclaves can create additional keys. Consistent with Section 5.2.1, the SM holds a metadata structure for each enclave which contains the enclave-specific keys and attestation data.

Software CURE Enclaves

Our CURE prototype implements user-space enclaves, kernel-space enclaves and sub-space enclaves and thus fulfills requirement FR.1 (Section 4.2). In the following, we describe the enclave memory layout and give implementation details on each enclave type.

Enclave Memory Layout

In our prototype, each enclave is assigned a continuous physical memory region which is allocated during enclave setup using Linux's Contiguous Memory Allocator (CMA). The enclave memory layout is shown in Figure 5. At the lowest address, the enclave code and data pages are loaded by the OS. The enclave page tables are stored only in the enclave memory, while the memory management is performed by the untrusted OS. During the enclave setup, the SM loads the enclave state into the enclave memory. The free memory space is used for dynamic memory allocation. The memory region at the highest address is used for the communication between enclave and OS. Since our prototype allows one continuous memory region per enclave, the shared memory region is either assigned to the communicating enclave or to no enclave, which automatically assigns the region to the OS. When the enclave is set up, the address of the shared memory region is communicated to the OS via the return value of the SM call. The enclave is informed by storing the address information on the stack of the enclave. The sizes of the enclave state and the shared region can be freely set; we set them to 64 bytes and 4 KB, respectively.

Security Monitor

We implement the SM as a sub-space enclave (Enc 5 in Figure 2) separated from the firmware in memory (Figure 4), which is enforced by the hardware security primitives. However, this still leaves the firmware with access to the security-critical machine-level registers eid, which we added, and mtvec, which holds the base address of the trap vector that the core jumps to after an interrupt. To prevent the firmware from configuring these registers, we implement a hardware mechanism that ensures that the eid and mtvec registers can only be written when the eid register is set to the SM ID (0xF). The eid register is, in turn, set to 0xF by the hardware when performing a context switch to machine mode, which traps in the SM.
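To illustrate the two cryptographic operations described above (binary verification and sealing of the enclave state), the following sketch uses libsodium instead of the prototype's libtomcrypt, so the calls shown are not the ones in CURE; key management and rollback protection are omitted:

```c
#include <sodium.h>   /* call sodium_init() once before use */

/* Enclave binary verification during setup: a detached Ed25519
 * signature is checked against the issuer's public key. Returns 0
 * if the signature is valid. */
static int verify_enclave_binary(const unsigned char *binary,
                                 unsigned long long len,
                                 const unsigned char sig[crypto_sign_BYTES],
                                 const unsigned char pk[crypto_sign_PUBLICKEYBYTES])
{
    return crypto_sign_verify_detached(sig, binary, len, pk);
}

/* Sealing the enclave state with AES-256-GCM so it can be evicted to
 * untrusted storage; the fresh nonce must be stored alongside the
 * ciphertext. Rollback protection (e.g., a monotonic counter) is
 * handled separately and not shown here. */
static int seal_enclave_state(unsigned char *ct, unsigned long long *ct_len,
                              unsigned char nonce[crypto_aead_aes256gcm_NPUBBYTES],
                              const unsigned char *state, unsigned long long state_len,
                              const unsigned char key[crypto_aead_aes256gcm_KEYBYTES])
{
    if (!crypto_aead_aes256gcm_is_available())
        return -1;   /* libsodium's AES-GCM needs hardware AES support */
    randombytes_buf(nonce, crypto_aead_aes256gcm_NPUBBYTES);
    return crypto_aead_aes256gcm_encrypt(ct, ct_len, state, state_len,
                                         NULL, 0, NULL, nonce, key);
}
```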
User-space Enclaves

Memory management. Since the memory management of the user-space enclave (Enc 1 in Figure 2) is performed by the untrusted OS, we include the enclave page tables in the enclave memory to prevent page-table-based attacks [65,92,101]. During enclave setup, the OS creates the page tables exactly as for a normal process. However, the OS turns off demand paging and maps all code and data pages to prevent page faults during enclave execution. The page tables are then handed to the SM, which verifies their validity. Moreover, the SM verifies that the supervisor address translation and protection (satp) register, which holds the address of the root page table, points into the enclave memory. Subsequently, the page tables are copied to the enclave memory. Once the enclave is set up, the OS cannot alter the page tables anymore. When the dynamic allocation of memory leads to a page fault, the OS creates a new page table entry and passes it to the SM, which includes it in the page tables.

Syscalls. Our prototype provides enclaves which can use OS services, e.g., file or network I/O, over Linux syscalls which trap in the SM. We include AES-GCM in the enclaves to encrypt and integrity-protect sensitive data shared with the OS. Enclaves are always exited through the SM, which is enforced by clearing the machine exception delegation (medeleg), machine interrupt delegation (mideleg), supervisor exception delegation (sedeleg) and supervisor interrupt delegation (sideleg) registers during enclave setup. During run time, the enclave can register custom trap handlers which are called by the SM before switching to the OS after an interrupt. Thus, the enclave can observe its own interrupt behavior and detect suspicious patterns caused by interrupt-based side-channel attacks [15,91].

Kernel-space Enclaves

Our CURE prototype supports kernel-space enclaves with and without user space (Enc 3 and Enc 2 in Figure 2). As the enclave RT, we use a Linux LTS kernel 4.19, which on RISC-V currently does not support a suspension mode.

Allocating resources. When an enclave is set up, the custom kernel module unmounts the driver modules of all peripherals requested by the enclave. Then, the SM performs the security-critical tasks of the enclave setup, as described in Section 5.2.3. When the enclave binary is successfully verified, the kernel module shuts down the core(s) reserved for the enclave using the Linux hotplugging mechanism. Next, a switch to the SM is performed, which jumps to the entry point of the enclave RT in order to boot the RT on all reserved cores. At enclave shutdown, the SM performs the cleanup, and all freed cores are reintegrated into the OS. Then, the kernel module remounts the driver modules.

Enclave-OS communication. Since our CURE prototype allows one memory region per enclave, access to a shared region needs to be requested from the SM, which then assigns the shared region to the requesting party (sender). Once the sender has finished accessing the shared region, the SM assigns the shared region to the receiver and notifies the receiver about new data in the shared region using an inter-processor interrupt. In contrast to the user-space enclave, only external interrupts are trapped in the SM during kernel-space enclave execution, which is enforced by configuring the medeleg and sedeleg registers during the enclave setup. All interrupts triggered by the enclave cores are handled by the RT.
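Returning to the page-table verification the SM performs for user-space enclaves, the following C sketch shows one way such a check can be written for Sv39, assuming the SM runs in machine mode with direct physical addressing; it is a simplified illustration under our own names, not the prototype's code:

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_V (1ULL << 0)
#define PTE_R (1ULL << 1)
#define PTE_W (1ULL << 2)
#define PTE_X (1ULL << 3)

struct region { uint64_t base, size; };   /* the enclave's physical region */

static bool in_region(const struct region *r, uint64_t pa, uint64_t len)
{
    return pa >= r->base && pa + len <= r->base + r->size;
}

/* Check one Sv39 table page: the page itself, every next-level table
 * and every leaf mapping must lie inside the enclave region. The SM
 * runs in machine mode without translation, so physical addresses
 * can be dereferenced directly. */
static bool check_table(const struct region *encl, uint64_t table_pa, int level)
{
    if (!in_region(encl, table_pa, 4096))
        return false;
    const uint64_t *ptes = (const uint64_t *)(uintptr_t)table_pa;
    for (int i = 0; i < 512; i++) {
        uint64_t pte = ptes[i];
        if (!(pte & PTE_V))
            continue;
        uint64_t pa = (pte >> 10) << 12;          /* PPN field -> address */
        if (pte & (PTE_R | PTE_W | PTE_X)) {      /* leaf entry */
            uint64_t map_sz = 4096ULL << (9 * level);  /* 4K/2M/1G pages */
            if (!in_region(encl, pa, map_sz))
                return false;
        } else if (level == 0 || !check_table(encl, pa, level - 1)) {
            return false;   /* non-leaf at the last level is malformed */
        }
    }
    return true;
}

/* satp must select Sv39 (MODE = 8) and point at a root table that is
 * itself enclave memory. */
static bool check_satp(const struct region *encl, uint64_t satp)
{
    return (satp >> 60) == 8 &&
           check_table(encl, (satp & ((1ULL << 44) - 1)) << 12, 2);
}
```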
Hardware Security Primitives

We describe next how we modify the Rocket Chip to implement CURE's hardware security primitives (Section 5.3).

Extending the TileLink Protocol

We modify the Rocket core such that on every memory access, the eid register value is sent as part of the issued bus transaction. This also includes transactions issued by the PTW (page table walker) during the page table walk when performing address translations. Thus, if a malicious enclave modified its own page tables to point to a memory region outside of the enclave memory, the PTW transactions are blocked by the access control mechanisms on the system bus. TileLink [80] is the default bus protocol used on the Rocket Chip to connect on-chip components. TileLink specifies five channels (A-E). When connecting a parent which contains an internal cache to the system bus, all five channels are needed to implement the TileLink coherence protocol (TL-C). When a parent does not require cache coherency, only the A and D channels are needed (TL-UL/UH). In our RISC-V SoC, the Rocket cores and the DMA devices are connected over TL-C since they contain L1 caches. We extend the TileLink protocol by a 4-bit eid signal to propagate the enclave ID. The eid signal is only added to the A and C channels, which transport the memory read and write transactions from the parents (CPU and DMA devices) to the system bus and children (peripherals and main memory), respectively. All other channels remain unmodified.

System Bus Access Control

We implement CURE's access control mechanisms in the system bus by adding registers and control logic at the memory and peripheral arbiters and at the ports connecting DMA devices. The hardware changes are shown in Figure 6 for an exemplary system containing two cores, one DMA device and multiple peripherals. All newly added components are connected to the control bus of the system and thus are configurable by the SM over MMIO. We omit the control bus in Figure 6 for the sake of clarity. Our implementation supports enclave-to-peripheral binding and thus fulfills FR.4. Moreover, in contrast to related work [20,23], all access control is performed in parallel to arbitration, thus guaranteeing execution in a single clock cycle without incurring additional latency.

Performing access control. The added registers hold memory ranges defined by a 32-bit base address (Addr) and a 32-bit mask (Mask), and are used by the control logic to perform access control on every memory transaction using the eid and address signals. Access control is only performed on channels with a parent-to-child direction (channels A and C). On an access violation, the transaction is redirected (with all-zero data) to an unused, zero-initialized memory region. Thus, all forbidden transactions write/read zeros to/from the unused memory region. An adversarial enclave might fill the L1 with malicious data which could get flushed with SM privileges during an enclave context switch. To prevent this, we modify the core such that on every switch to the SM, the L1 is flushed before the eid register is set. We connect the system bus to the peripheral and interrupt bus. This allows the SM to configure the added registers and control logic, and to trigger an interrupt upon an access violation, which is handled by the SM.

Memory arbiter. We add 15 registers to the memory arbiter, one for each enclave (13), the SM and the firmware. Each register defines the memory region assigned to the respective execution context. For the enclaves, the control logic verifies that transactions only target the assigned region. For the SM, no access control is performed. The OS is allowed to access all regions except the ones specified in the registers of the arbiter. The firmware is allowed to access its own and the OS regions, which is why a static ID needs to be assigned to the firmware.
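The following C model illustrates the Addr/Mask matching and the per-context policy just described. In hardware this is combinational logic evaluated in parallel to arbitration; the struct and function names here are ours, not the prototype's:

```c
#include <stdbool.h>
#include <stdint.h>

#define EID_OS 0x0
#define EID_FW 0xE
#define EID_SM 0xF

/* One Addr/Mask pair per context (13 enclaves + SM + firmware),
 * written only by the SM over MMIO; 'valid' models a register that
 * is currently programmed. */
struct range_reg { uint32_t addr, mask; bool valid; };
static struct range_reg region[16];   /* indexed by eid; slot 0 unused */

/* A transaction as the arbiter sees it: the 4-bit eid travels with
 * the address on the extended TileLink A/C channels. */
struct bus_txn { uint8_t eid; uint32_t addr; };

static bool match(const struct range_reg *r, uint32_t a)
{
    return r->valid && ((a & r->mask) == (r->addr & r->mask));
}

static bool txn_allowed(const struct bus_txn *t)
{
    if (t->eid == EID_SM)
        return true;                              /* SM: unrestricted */
    if (t->eid != EID_OS && t->eid != EID_FW)
        return match(&region[t->eid], t->addr);   /* enclave: own region only */
    if (t->eid == EID_FW && match(&region[EID_FW], t->addr))
        return true;                              /* firmware: also its own region */
    for (int id = 0x1; id <= 0xF; id++)           /* OS/firmware: anything that */
        if (match(&region[id], t->addr))          /* is not assigned elsewhere  */
            return false;
    return true;
    /* On a violation, the real arbiter does not fault: it silently
     * redirects the transaction to a zero-initialized scratch region
     * and raises an interrupt that the SM handles. */
}
```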
Peripheral arbiter. We add two registers per peripheral to the arbiter of the peripheral bus. One covers the MMIO region of the peripheral, and the other 32-bit register contains a bitmap that defines read and write permissions for every enclave.

DMA port. We add a register at every port which connects a DMA device. In CURE, a DMA device is exclusively assigned to a single enclave at any point in time. In our prototype, a DMA device accesses the main memory but not other peripherals. If specific use cases, e.g., PCI peer-to-peer transactions [67], must be supported, additional registers need to be added to specify multiple allowed memory regions. Together with the peripheral arbiter, this fulfills FR.2.

L2 Cache Partitioning

For cache side-channel resilience, we implement way-based flexible cache partitioning for the shared L2 (last-level) cache [81] in our prototype. We leverage the eid-extended TileLink memory transactions to detect when an enclave issues a cache request.

Configurable partitioning. We implement two modes of partitioning to allow enclaves to individually enable cache side-channel resilience. The first mode, CP-BASIC, performs rudimentary access control where each enclave is only permitted to access (hit) its own cache lines, but is free to evict cache lines from other ways. The second mode, CP-STRICT, provides more stringent security guarantees by allocating one or more ways (across all cache sets) exclusively to the pertinent enclave. Only these cache ways can be accessed by the enclave to store or evict cache lines. This provides strict isolation between the cache resources of the different enclaves, thus effectively blocking cache side-channel leakage, but reduces the cache resources available to the enclave. Depending on the enclave service requirements, the partitioning mode can be configured by the SM independently for each enclave at setup and during the enclave lifetime, thus fulfilling FR.5.

Access control. We extend each cache entry's metadata with a 4-bit line-eid register encoding the owner enclave of the cache line, as shown in Figure 6. We extend the cache lookup logic to generate a hit only when both the tag and the eid match for CP-BASIC, as opposed to the usual tag matching. To support CP-STRICT, the cache ways directory is also extended with a 1-bit register excl that identifies whether each way is owned exclusively by an enclave, as well as a 4-bit eid register that identifies the owner enclave. The cache controller logic is augmented with a register-based lookup table that is indexed by the eid. It encodes with a single mode bit whether the corresponding enclave has CP-STRICT enabled, along with its allocated cache way indices. In CP-STRICT, cache hits are only allowed in these cache ways.

Eviction and replacement. The L2 cache we use implements a pseudo-random replacement policy where any way is selected pseudo-randomly for eviction. We modify this to select a way only from the subset of ways allowed for each enclave. For enclaves with CP-STRICT, only ways exclusively allocated to them are used. For enclaves with CP-BASIC, all ways (except ways allocated exclusively to other enclaves) are used.
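A small sketch of this modified victim selection, again as an illustrative software model with our own names; the real cache implements this in its replacement logic:

```c
#include <stdint.h>

#define NUM_WAYS 16

/* Per-enclave configuration maintained by the partitioning logic:
 * a CP-STRICT mode bit and a bitmask of ways the enclave owns. */
struct part_cfg { uint8_t strict; uint16_t own_ways; };
static struct part_cfg cfg[16];         /* indexed by eid */
static uint16_t excl_ways;              /* ways exclusively owned by anyone */

/* Pick a victim way for a miss by 'eid': CP-STRICT enclaves evict
 * only inside their exclusive ways; everyone else may evict any way
 * that is not exclusively held by another context. 'rnd' stands in
 * for the cache's pseudo-random source. */
static int victim_way(uint8_t eid, uint16_t rnd)
{
    uint16_t allowed = cfg[eid].strict
        ? cfg[eid].own_ways
        : (uint16_t)(~excl_ways | cfg[eid].own_ways);
    if (allowed == 0)
        return -1;                      /* misconfigured: no usable way */
    for (int i = 0; i < NUM_WAYS; i++) {
        int way = (rnd + i) % NUM_WAYS; /* start at a pseudo-random way */
        if (allowed & (1u << way))
            return way;
    }
    return -1;                          /* unreachable when allowed != 0 */
}
```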
Per-enclave cache allocation. Unallocated way indices are maintained in a register vector. If an enclave with CP-STRICT enabled requests to exclusively own cache ways, the required ways are allocated if they are available and below the allowed maximum per enclave. An inherent drawback of this partitioning technique is that the limited number of cache ways directly constrains the number of simultaneous enclaves that can have CP-STRICT enabled. However, this is only an implementation decision for our particular prototype; more sophisticated cache designs [25,74,99] can be integrated into CURE.

Security Considerations

To protect from a strong software adversary, our instantiation of CURE must fulfill the security requirements introduced in Section 4.1. In the following section, we discuss how our prototype meets the requirements SR.1, SR.2 and SR.4, whereas we show the fulfillment of SR.3 in Section 8.

Hardware Security Primitives (SR.2)

The enclave protection is enforced by hardware SPs at the system bus and L2 cache which are configured over MMIO. After the system is powered on, and on every switch to the machine level, the CPU jumps to the trap vector whose address is stored in the mtvec register. The trap vector is included in the SM such that the boot process and context switches are overseen by the SM. The mtvec register is assigned to the SM by coupling the write permission to the SM enclave ID (stored in the eid register), which is also assigned to the SM. The eid register is set by hardware during the context switch into the machine level. During boot, the SM assigns the SP MMIO regions exclusively to its own enclave ID.

Enclave Protection (SR.1)

At rest, the enclave binaries are stored unencrypted in memory. However, during enclave setup, the SM verifies the binaries using digital signatures. Moreover, the L1 is flushed during setup/teardown to remove malicious or sensitive data from the cache. The communication between enclaves and the OS is controlled by the SM, as is the delegation of the shared memory address. Hardware-assisted hyperthreading is disabled during enclave execution. The enclave state, which is loaded during the setup process, is persistently stored by the SM using authenticated encryption, either in RAM as part of the SM state or evicted to flash/disk, and is additionally rollback protected. During teardown, the SM removes all enclave data from the memory. The SPs in hardware perform access control on physical addresses at the system bus. Thus, CURE protects from adversaries in privileged software levels (PL2-PL0) and from off-core adversaries, e.g., peripherals performing DMA. The enclave data cached in the L1 during run time is protected by flushing the L1 on all context switches. Data in the L2 cache is protected by assigning cache lines exclusively to enclaves. Since no enclave (except the SM) has elevated rights on the system, CURE also protects from malicious enclaves.

Side-channel Attack Resilience (SR.4)

Cache side-channel attacks. Side-channel attacks which target data in core-exclusive cache resources, i.e., in the L1 [11], the BTB [50] or the TLB [31], are prevented by the SM by flushing these resources on all context switches. Side-channel attacks targeting data in the shared L2 cache [36,39,102] are prevented through strict way-based cache partitioning.
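The register coupling described under SR.2 boils down to a simple write-gate predicate. The following C model restates it; the names are illustrative, and in the prototype this is a hardware check on CSR writes:

```c
#include <stdbool.h>
#include <stdint.h>

#define EID_SM 0xF

enum csr_id { CSR_EID, CSR_MTVEC, CSR_OTHER };

/* Software model of the hardware write gate: eid and mtvec only
 * accept writes while the core already runs as the SM. Since the
 * hardware forces eid to EID_SM on every trap into machine mode,
 * the firmware (which shares that privilege level) can neither
 * retarget the trap vector nor forge an enclave identity. */
static bool csr_write_permitted(enum csr_id which, uint8_t current_eid)
{
    if (which == CSR_EID || which == CSR_MTVEC)
        return current_eid == EID_SM;
    return true;   /* other machine-level CSRs: normal privilege rules */
}
```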
Controlled side-channel attacks. Side-channel attacks on user-space enclaves which target page tables [65,92,101] are prevented by including the page tables in the enclave memory and by mapping all enclave code and data pages before execution. The SM verifies the page tables and the base address of the root page table stored in the satp register. The hardware SPs prevent the page table walker (PTW) from performing forbidden memory accesses during the page table walk. Side-channel attacks exploiting interrupts [91] can be mitigated using trap handlers (Section 5.2.2). CURE provides cryptographic primitives in the user-space enclaves to encrypt and integrity-protect data shared with the OS. However, using OS services over syscalls always carries a residual risk of leaking metadata [2,77] or of receiving malicious return values from the OS [13]. In user-space enclaves, these attacks must be mitigated on the application level inside the enclave, e.g., by using data-oblivious algorithms [2,68] or by verifying the return values [13]. None of these attacks poses a threat to kernel-space enclaves since all resources are handled by the enclave RT. However, for VM enclaves, the second-level page tables need to be protected, as with user-space enclaves. Interrupt-based attacks can again be mitigated with custom trap handlers. No additional countermeasures are needed to protect the SM since the SM does not use a virtual address space or OS services and handles its own interrupts.

Transient execution attacks. The known transient execution attacks either mistrain the branch predictor [14,43,45], rely on information leakage [89] or malicious injections [90] in the L1 cache, or rely on resources shared when using hardware-assisted hyperthreading [12,78,90,93,94]. By disabling hyperthreading during enclave execution (or alternatively assigning all threads to the enclave) and flushing core-exclusive caches, CURE protects enclaves against the known transient execution attacks.

Evaluation

In the following section, we systematically evaluate our CURE prototype. First, we quantify the software and hardware modifications required to implement CURE. Next, we evaluate the performance of CURE's enclaves using microbenchmarks, and the overall performance overhead of CURE using generic RISC-V benchmark suites.

System Modifications

The small code base of our SM keeps it amenable to formal verification [44], thus fulfilling SR.3. Note that since CURE isolates the SM in its own sub-space enclave, CURE can achieve a smaller TCB size than other RISC-V security architectures [22,48,98] which include all code in the machine level, i.e., the firmware code, in the TCB. In our implementation, the firmware code consists of 3286 LOC. Thus, by isolating the SM in a sub-space enclave, we managed to cut the software TCB in half, where the actual management code is even less (15.56%). Protecting a sensitive service in a user-space enclave requires adding a small custom library (10KB) to the service binary. For the kernel-space enclaves, management code (the enclave RT) must be added in addition. In our prototype, we use the Linux LTS kernel 4.19 as the RT, which increases the size of the service binary by 3MB. Custom RTs can further decrease this kernel-space enclave overhead. However, kernel-space enclaves will always have an increased binary size and memory consumption compared to user-space enclaves.

Hardware overhead. We evaluate the hardware overhead of our changes by synthesizing the generated Verilog descriptions using Xilinx Vivado tools targeting a Virtex UltraScale FPGA device.
Table 2 shows a breakdown of the individual area overhead of the different modifications required to implement CURE. Overhead is represented in look-up tables (LUTs), the fundamental programmable logic blocks of FPGA devices, and in registers. We compare in Table 2 against a baseline configuration of 2 in-order Rocket cores (each with an L1 cache). Extending the TileLink protocol throughout the system bus incurs a minimal overhead of 105 LUTs per core relative to the baseline (211 LUTs for 2 cores). This overhead includes propagating the eid in tandem with memory access transactions through the MMU of every core, and is thus replicated for every additional core in the system. In contrast, the rest of our modifications for performing access control at the system bus, including enclave-to-peripheral binding, are independent of the number of cores. Incorporating logic to perform access control for every MMIO peripheral utilizes an additional 248 LUTs, and 112 LUTs per DMA device. Each represents below 0.5% overhead relative to a dual-core baseline SoC. Integrating an L2 cache into our baseline setup utilizes an additional 30,232 LUTs. Applying our on-demand way-based partitioning to this cache costs only 516 LUTs and 214 registers, which is 1.8% overhead relative to the L2 cache logic utilization itself, and 0.5% relative to the entire SoC. Our area overhead evaluation results demonstrate that the hardware modifications required to achieve our fine-grained and customized enclave protection in CURE indeed incur minimal area overhead on both single- and multi-core architectures, thus fulfilling FR.3.

Performance Evaluation

We evaluate the performance of CURE using our FPGA-based setup coupled with cycle-accurate simulators. We conduct our experiments using micro and macro benchmarks for user-space and kernel-space enclaves, and compare them to unmodified user-space processes. We conduct 10 runs for each of the experiments.

Microbenchmarks

Table 3: CURE performance overhead compared to a normal process on microbenchmarks in milliseconds.

For the microbenchmarks (Table 3), we measured important key aspects individually: setting up and tearing down an enclave, context switching with the OS, dynamic memory allocation, and communication via shared memory. We implement an application which performs the required tasks (without any additional logic) and run it as a normal Linux process, a user-space enclave and a kernel-space enclave (single core). The enclave setup is triggered by a host app in Linux, which is the only purpose of the app. The enclave binary sizes therefore mainly correspond to the overhead produced by the enclave types, i.e., 10KB for the user-space enclave and around 3MB for the kernel-space enclave. For the enclave setup, our results show that most of the time (91.3% for user-space, 52.1% for kernel-space enclaves) is spent on binary verification. The Others measurement contains all remaining steps of the setup process, e.g., loading of the enclave binary, enclave configuration, flushing of the TLB and L1 cache, and jumping into the enclave. During our evaluation, we use 32KB 8-way set-associative L1 data and instruction caches and a TLB with 32 entries. The setup of the kernel-space enclave is more complex and includes additional setup steps, namely, freeing the core from pending processes, detaching the core from the OS, and booting the RT.
In the teardown phase, zeroing the memory produces 39.9% of the overhead for the user-space and 45.7% of the overhead for the kernel-space enclave. The cleaning is more time-consuming for the kernel-space enclave because of the larger enclave memory region. The Others measurement contains additional steps, e.g., exiting the enclave and flushing the TLB and L1 cache. In the kernel-space enclave case, the core must additionally be rebooted. As the RT in our prototype does not support a suspension mode (keeping the enclave in memory), we emulate the context switch to the OS by performing a teardown without zeroing memory, and the context switch from the OS by performing a setup phase without verifying the enclave binary. Suspending the enclave and restoring it should be faster than a regular shutdown and boot; thus, this represents a worst-case approximation. The context switching measurements also contain the overhead for flushing the TLB and L1 cache, for which we measure 28 cycles and 3141 cycles, respectively. As new entries to the page tables need to be verified by the SM, user-space enclaves have a higher overhead for dynamic memory allocation. In the kernel-space enclave case, all page tables are included in the enclave memory and thus do not need to be verified. During communication, the OS can directly access a process's memory, whereas the user-space enclave needs to copy the data to be shared into the shared memory region. The kernel-space enclave additionally has to request the shared memory from the SM, and the OS needs to be notified by the SM using an inter-processor interrupt.

Macrobenchmarks

To evaluate the performance overhead in realistic scenarios, we used three different benchmarking suites that stress single cores, multi-core setups with two cores under test, and how the enclaves influence an OS under load. Furthermore, we measure the performance impact of our L2 cache partitioning by assigning 1/16 of the L2 cache to the enclave under test.

Single-core benchmarks. For single-core performance, we evaluated CURE with the RISC-V benchmark suites rv8 [75] and CoreMark [26], which are commonly used for TEE architectures [22,48]. The results depicted in Figure 7 are normalized to a normal user-space process. We measured a geometric mean performance overhead of 19.70% for user-space enclaves and 15.33% for kernel-space enclaves. As shown in Table 3, kernel-space enclaves have an increased setup time which, however, amortizes with longer enclave run times. Outliers like aes, norx and qsort are memory-intensive workloads that perform a large number of context switches to the OS, mainly for dynamic memory allocation. Performing context switches and dynamic memory allocation is more expensive for the user-space enclave since the SM must verify newly created page table entries and copy them to the enclave memory. During one run, we count 24,601 syscalls for aes, 24,602 syscalls for norx and 48,846 syscalls for qsort. We also measure the overhead for flushing the TLB and L1 on every context switch, which is only necessary for the user-space enclave. The flushing induces only a small overhead, which makes up 1.03%, 1.48% and 1.21% of the overall overhead for aes, norx and qsort, respectively.

Table 4: Kernel-space enclave performance on the multi-core stress-ng benchmark in seconds.

Multi-core benchmarks. Since CURE allows assigning multiple cores to a kernel-space enclave, we also evaluated CURE on the dedicated multi-core benchmark stress-ng [41].
The results in Table 4 show that multi-core kernel-space enclaves are practical, achieving almost the same performance as normal processes.

Influence on OS. We stress the OS by running CoreMark while starting an enclave in parallel. For the user-space enclave we use a single core, while two cores are needed for the kernel-space enclave, for which we simulate the suspension mode as in the microbenchmarks. For one core, the CoreMark running on the OS is slowed down by 0.519s (1.56%). For two cores, with only one call after setting up the kernel-space enclave, the OS is slowed down by 0.792s (4.23%), showing that the kernel-space enclave has a higher performance impact (Table 5).

Cache partitioning. For our cycle-accurate experiments, we configure the core with 64KB 8-way set-associative L1 data and instruction caches and a 2048KB 16-way set-associative shared L2 cache. The impact of way-based cache partitioning on performance is very application-dependent (besides the cache configuration and the cache and main memory access latencies), as demonstrated by our experiments, where the performance overhead ranges from a little under 0.2% for the prime benchmark to a little over 9% for the bigint benchmark, for example. We measure a geometric mean of 3.09%. We note that the reported overheads are performance hits against a best-case baseline in which the only workload utilizing the cache resources (all 16 ways of the L2 cache) is the kernel-space enclave under test. Furthermore, we observe that performance improves significantly once more than 1 way is allocated per enclave, which is the likely scenario for enclaves that run applications with larger working sets and can benefit more from increased L2 cache resources.

Related Work

The existing works most closely related to CURE are TEE architectures which focus on modern high-performance computer systems. In contrast to capability systems or memory tagging extensions [30,82,88,95,100], TEE architectures protect sensitive services in security contexts (enclaves) against privileged software adversaries. We do not further discuss TEE architectures focusing on embedded systems [8,47,66,98]. We compare CURE to other TEE architectures in Table 6. All presented architectures provide a single type of enclave which, on an abstract level, resembles either the user-space or kernel-space enclaves provided by CURE. Intel SGX [64] offers user-space enclaves on Intel processors. The untrusted OS provides memory management and other OS services, e.g., exception handling, to the enclaves. SGX does not protect against cache side-channel attacks [11,50] or controlled side-channel attacks [91,92,101]. Many extensions to SGX were proposed in order to mitigate side-channel attacks [1,2,7,15,69,79]; however, these solutions are all ad-hoc approaches that do not fix the underlying design shortcomings of SGX, but instead leverage costly data-oblivious algorithms [1,2,7], or exploit not commonly available hardware in an unintended way [15,79]. Sanctum [22], which also provides user-space enclaves, addresses both cache side-channels, through page coloring, and controlled side-channels, by storing the enclave page tables in the enclave memory, like CURE. However, page coloring is not practical as it influences the whole OS memory layout and cannot be efficiently changed at run time. CURE's cache partitioning instead allows dynamic assignment of cache ways, and CURE also provides mechanisms to mitigate interrupt-based side-channel attacks.
Sanctum and SGX only provide user-space enclaves, which are inherently limited as they cannot provide secure I/O, but only protect from simple DMA attacks. Similar to SGX, AMD SEV [38], which isolates complete VMs in the form of kernel-space enclaves, does not consider any side-channel attacks. VM data in the CPU cache is protected by an access control mechanism relying on Address Space Identifiers which, however, does not protect against cache side-channel attacks. As the memory management and I/O services are provided by the untrusted hypervisor, SEV is also vulnerable to controlled side-channel attacks [65] and cannot provide secure peripheral binding [51]. ARM TrustZone [3] separates the system into a normal and a secure world, the latter being a single kernel-space enclave which does not rely on the OS and thus is protected from controlled side-channel attacks. TrustZone does not provide cache side-channel protection; this can only be achieved by using additional hardware [106]. Further, TrustZone's major design shortcoming is that it provides only a single enclave; thus, sensitive services cannot be strongly isolated from each other with TrustZone, and hence access to TrustZone is highly limited in practice by device vendors. Extensions building upon TrustZone mostly tried to enable multi-enclave support for TrustZone [10,18,33,85] with workarounds that either rely on ARM IP [10], block the hypervisor [18,33], or massively impact performance [85]. Since multiple enclaves were not considered in the TrustZone design from the beginning, even the proposed extensions cannot bind peripherals directly and exclusively to single enclaves. Keystone [48] provides kernel-space enclaves on RISC-V. Moreover, Keystone uses way-based cache partitioning against cache side-channel attacks, comparable to CURE. However, Keystone provides a coarse-grained cache way assignment per CPU core, whereas CURE assigns cache ways to enclaves with freely configurable boundaries. Thus, the Keystone design is limited to a single enclave type, which prevents Keystone from isolating the firmware from the actual TCB and demands adapting the sensitive services to the predefined enclave. Moreover, in contrast to CURE, Keystone does not support enclave-to-peripheral binding.

Table 6: Comparison of major TEE architectures with respect to provided enclave types, dynamic cache side-channel and controlled side-channel resilience, and enclave-to-peripheral binding, i.e., MMIO/DMA protection with exclusive enclave assignment. • indicates full support, ◐ support with limitations, ○ no support, and * that resilience can only be achieved through extensions.

Conclusion

We presented CURE, a novel TEE architecture which provides strongly isolated enclaves that can be adapted to the functionality and security requirements of the sensitive services they protect. CURE offers different types of enclaves, ranging from sub-space enclaves over user-space enclaves to self-sustained kernel-space enclaves which can execute privileged software. CURE's protection mechanisms are based on new hardware security primitives on the system bus, the shared cache and the CPU. We instantiate CURE on a RISC-V system. The evaluation of our prototype indicates minimal hardware overhead for the security primitives and a moderate overall performance overhead.
2020-11-02T02:00:33.564Z
2020-10-29T00:00:00.000
{ "year": 2020, "sha1": "3f800641a2b3bb9efa66bf6549942b0bd2a1bdda", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3f800641a2b3bb9efa66bf6549942b0bd2a1bdda", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
14395110
pes2o/s2orc
v3-fos-license
Homicide–suicide and the role of mental disorder: a national consecutive case series

Purpose: There is a lack of robust empirical research examining mental disorder and homicide–suicide. Primary care medical records are seldom used in homicide–suicide research. The aims of this study were to describe the characteristics of offenders and victims; determine the prevalence of mental disorder and contact with mental health services; and examine adverse events prior to the offence.

Methods: This was a mixed-methods study based on a consecutive case series of offences in England and Wales occurring between 2006 and 2008. 60 homicide–suicides were recorded. Data sources included coroner's records, police files, General Practice (GP) and specialist mental health records, and newspaper articles.

Results: Most victims were spouses/partners and/or children. Most perpetrators were male (88 %) and most victims were female (77 %). The incidents were commonly preceded by relationship breakdown and separation. 62 % had mental health problems. A quarter visited a GP for emotional distress within a month of the incident. Few had been in recent contact with mental health services before the incident (12 %). Self-harm (26 %) and domestic violence (39 %) were common.

Conclusion: GPs cannot be expected to prevent homicide–suicide directly, but they can reduce risk generally, via the treatment of depression and by recognising the risks associated with domestic violence.

Introduction

Homicide–suicide is where an individual kills another person and then takes their own life. In the majority of cases the suicide occurs immediately. In our definition we have also included those who died within 3 days of the homicide, and cases where the offender was fatally injured at the time of the homicide but died more than 3 days later. This is consistent with previous studies [1][2][3]. Over recent years, homicide–suicide has received increased academic attention, both empirically and theoretically, which has advanced our understanding of the phenomenon. These acts are mostly carried out by men, the victims are most often female intimate partners or close family members, and multiple victims are common. Explanations of why people commit these acts have been proposed and classified in numerous typologies. Motivations include jealousy and revenge following real or perceived infidelity and relationship breakdown, altruism or mercy killing, financial problems and mental disorder [1,4,5]. However, these typologies have their limitations and do not comprehensively capture the complexities of homicide–suicide. Reliable information on the role of mental disorder in these cases is lacking. In a review of the literature spanning 60 years, Roma et al. (2012) found 30 studies presenting clinical findings, 20 reporting depression and 11 reporting psychosis [6]. Across these studies, the definitions of mental disorder used have resulted in wide variation in the reported prevalence rates (18–75 %) [7,8]. Definitions of mental illness have included serious mental illness that required hospitalisation [7]; any treatment received from specialist mental health care services [2]; a recorded history of mental illness documented by the police and/or medical examiners (National Violent Death Reporting System, US) [9]; or a post-mortem diagnosis derived from medical records and/or informant descriptions of the deceased's mental state (psychological autopsy) [8].
Whilst these studies have provided valuable clinical insight, they are often based on subgroups such as filicide-suicide [10]; older couples [11]; or intimate-partner homicide-suicide [12][13][14][15]. Furthermore, the findings are commonly based on small regional samples [16][17][18]. Therefore, due to the difficulties with data collection, national clinical studies of homicide-suicide are rare. In addition, diagnoses of mental disorder made in primary care and recorded in general practice medical records are seldom accessed. In this study, we aimed to further our understanding of homicide-suicide by firstly describing the characteristics of offenders and victims. Secondly, we aimed to examine the clinical antecedents of these incidents (i.e. history of mental disorder, contact with mental health services, diagnosis, medication prescribed). Thirdly, we aimed to examine adverse events prior to the offence (i.e. relationship breakdown and child custody disputes). The low incidence rate of homicide-suicide in England and Wales of 0.05 per 100,000 population (approximately 23 incidents per year) enabled a detailed exploration of the complex aetiology of these cases. This is therefore the first study to complete an in-depth examination using multiple data sources on a national consecutive case series of homicide-suicide.

Research design

A mixed-methods design was used to examine data from a national consecutive case series of homicide-suicides in England and Wales between 1st January 2006 and 31st December 2008. The case series was accessed via the National Confidential Inquiry into Suicide and Homicide by People with Mental Illness (NCISH) [3]. NCISH collates and maintains clinical data on homicide and suicide not replicated by any other national or international research group or organisation. Access to these data makes the findings unique. Additional information for both the qualitative and quantitative phases of the study was collected and analysed concurrently (i.e. both methods were used to explore the same research questions at the same time) [19].

Sample: selection of participants

A total of 83 people were initially suspected of carrying out a homicide-suicide over the study period. Twenty-three were excluded for reasons including: the perpetrator took their own life more than 3 days after the homicide (18 offenders); in 4 cases the coroner did not return an unlawful killing or suicide verdict; and in 1 case the inquest had not taken place at the time of analysis, and therefore a verdict had not been determined. The final sample consisted of 60 cases.

Data collection

Cases were identified using the existing NCISH homicide-suicide database. A detailed description of the methodology for recording homicide-suicide incidents has been published previously [2]. To briefly summarise, notification of homicide-suicide incidents was received from the Home Office Statistics Unit of Home Office Science in most cases (92 %). The remaining cases not recorded on the Homicide Index were notified to NCISH via individual police forces. Data held on the NCISH database included demographic characteristics of the offender and victim, offence details and information regarding diagnosis and contact with mental health services. Once the offender had been identified, additional information was requested and obtained from coroners' files and police records (82 % of cases), General Practice (GP) medical records (88 % of cases) and newspaper articles (90 % of cases).
The files contained antecedent information describing the events, circumstances and mental state of the offender leading up to the incidents, as specified in witness statements by friends and family members. GP medical records were examined to determine recent attendance, history of mental disorder, recent symptoms and medication prescribed. Triangulation of these data sources was undertaken to ensure the validity of responses. In instances of non-convergence between data sources, inconsistencies were reconciled by reviewing the evidence from all sources, discussing this amongst the authors and determining the most reasonable finding.

Quantitative data analysis

Data analysis was conducted using Stata version 11. Rates were calculated using ONS mid-year population estimates. Results were reported using 95 % confidence intervals. If an item of information was not known for a case, the case was removed from the analysis of that item; the denominator in all estimates was therefore the number of valid cases for each item and reflects the number of missing cases per item.

Qualitative data analysis

Documentary analysis was undertaken in accordance with Hodder [20]. The content of documents such as the coroner's report, police investigation reports and witness statements was explored using framework analysis [21]. This systematic and comprehensive approach was undertaken in 5 key stages: familiarisation with the content of the documents; identifying an initial thematic framework; indexing or coding the data; charting the salient text to the framework; and mapping themes and interpreting their meaning. An iterative approach was applied whereby the themes were repeatedly refined until saturation point was reached and no new themes emerged. This method was preferred to the alternative of mapping to existing typologies [1,5] as it enabled the generation of themes from our unique national consecutive case series. Coding was undertaken by SF, and the thematic framework was refined by SF, LG and JS during the iterative process.

The characteristics of perpetrators and victims

During the 3-year study period, 60 people (4 % of homicide offenders) took their own life after committing homicide, a rate of 0.04 per 100,000 population. The characteristics of offenders and victims are presented in Table 1. Most of the offenders were male. The median age was 44. Half were married or cohabiting. The majority were white. Almost a third had a conviction for violence, including 3 offenders who had previously been convicted of homicide. The victims of the previous homicides were a former partner in 2 cases and an acquaintance in the remaining case. Over a third of offenders were found to have previously committed domestic violence. There were 70 victims in total, the majority of whom were female. The median age of the victims was 38. Nearly two-thirds of victims were the offender's spouse/partner (current or ex); of these, 19 (45 %) had been in an intimate relationship for over 20 years. Twenty (29 %) children were killed by a parent (filicide) and 2 perpetrators killed a spouse/partner (current or ex) and children (familicide). In total, 6 offenders killed more than 1 victim. The highest number of victims killed in a single incident was 5.

Method of homicide and suicide

In the majority of incidents the offender took his or her own life within 24 h of committing homicide, and this occurred most often in the home shared by the offender and victim.
The most common method of homicide was by sharp instrument, and hanging was the most frequently used method of suicide. The same method was used for both the homicide and the suicide in 24 (40 %) cases (Table 1).

Adverse events prior to the homicide-suicide

The most common circumstance leading to the individual's emotional distress was the loss of a close personal relationship, either through imminent separation or divorce, or through a significant change in the relationship due to the victim's ill health (e.g. dementia). Other adverse events included problems with children (child custody/access problems or the perpetrator's belief that they were failing as a parent); financial problems; and adjusting to a new social situation (either because they were a recent immigrant or a recently released prisoner). Most offenders had previously exhibited difficulty coping with emotional distress, which had resulted in violence and aggression or self-harm. Five (9 %) had previously been bereaved by the suicide of a family member or close friend. In 1 case the offender experienced the loss within 2 weeks of the homicide-suicide; in the remaining cases the deaths occurred more than 10 years before the homicide-suicide.

History of mental disorder prior to the offence

Fifty-six (93 %) offenders were registered with a GP practice. Data from GP medical records were obtained on 53 (88 %) offenders. The following analysis is based on those 53 cases on whom data were available (Table 2). Thirty-three (62 %) had previously been diagnosed with a mental disorder. The most common diagnosis was depression, psychosis was rare, and none of the offenders had been diagnosed with personality disorder. Nearly a third had been prescribed psychotropic medication at the time of the homicide-suicide, mostly antidepressants. A quarter had previously attempted suicide between 1 and 4 times. Overall, suicidal ideation was noted by a GP in 7 cases (14 %). Three quarters of the perpetrators had attended their GP within 12 months of the offence, 40 % within a month. A third of offenders were recorded as having discussed psychological problems with their GP within a year of the offence, a quarter within a month. A response to our inquiry regarding contact with specialist mental health services was received for all 60 offenders. The majority (46, 77 %) had no previous contact with mental health services. Of the fourteen (23 %) who had a history of contact, 7 were former patients and 7 had been under mental health care within 12 months of the offence, most commonly for depression. Ten (19 %) had previously been admitted as an in-patient (Table 2).

Classifying homicide-suicide

The homicide-suicide offenders diverged into two broad groups: firstly, offenders with a history of depression; and secondly, offenders with no recorded mental disorder, many of whom had a history of domestic violence.

Discussion

This is the only study to our knowledge to have undertaken an in-depth examination of homicide-suicide and mental disorder in a national case series using data from primary care and specialist mental health services. The use of information from a variety of robust data sources and a mixed-methods research design makes this study unique. We found that perpetrators of homicide-suicide in this study were commonly middle-aged white males who had recently experienced a relationship breakdown. Mental disorder, particularly depression, was common. Almost two-thirds of those on whom medical records were obtained had previously been diagnosed with a mental disorder.
Although the overall proportion with mental disorder was high, the majority did not have serious mental illness requiring care under specialist mental health services. This is consistent with NCISH data on convicted homicide offenders (10 %) but was lower than the proportion of people who died by suicide while in recent contact with mental health services in England (28 %) [3]. It was more common for the offender to be seen by primary care services. Contact with GP services within a year of the incident (77 %) and within a month (40 %) was remarkably similar to the proportion of service contact in deaths by suicide in the general population, 77 % and 45 % respectively [22]. Almost a third had been prescribed antidepressants at the time of the offence, but we were unable to confirm whether medication was being taken as instructed. Non-adherence is estimated to occur in 50 % of patients prescribed antidepressants, which can lead to an increased risk of adverse symptoms [23,24]. Domestic violence was found to be an important feature of these cases, with over a third of offenders having previously assaulted a partner, similar to previous studies [25,26]. However, few of these offenders had a history of mental disorder. This contrasts with findings from a US psychological autopsy study that found most offenders had a history of domestic violence as well as depression [27]. We recognise that personality disorder was likely to be under-reported in this case series, as the diagnosis was not recorded in either the primary care or the mental health service medical records. Knoll and Hatters-Friedman (2015) reported that 17 % of perpetrators had antisocial personality disorder, but their sample size was limited (n = 18) [27]. It is challenging to identify personality traits retrospectively in a deceased offender. A potential solution would be to use document-derived assessment instruments such as the PAS-DOC [28] to examine the possible association between personality disorder and homicide-suicide.

Recognising and treating patients at risk

We found that the proportion of people who committed homicide-suicide and had a history of mental illness was consistent with previous studies using a broadly similar methodological approach [29,30]. However, the proportion was much higher than reported in US samples [9,25]. The literature shows that these acts are commonly carried out by middle-aged men, which supports the evidence that this is an emerging high-risk group for suicide [31]. Personal loss through relationship breakdown at this stage in life has been shown to be an important trigger in these incidents [32], but the causes of homicide-suicide extend beyond emotional stress arising from relationship difficulties (which are not uncommon problems). The key factor associated with homicide-suicides is the individual's lack of resilience and inability to cope with stressful events, evidenced by their reaction to previous similar experiences, either responding violently towards themselves or towards other people. Despite this recorded history of high-risk behaviour and emotional instability, these factors were not commonly explored by GPs through routine enquiry. Previous research has highlighted a reluctance to discuss emotional problems with patients for fear of opening 'Pandora's box' [33]. However, such discussions are necessary to differentiate between emotional distress and a diagnosis of mental disorder requiring referral to specialist mental health services.
Limitations

There are inherent difficulties in using retrospective data and documents originally generated for non-research purposes, which can introduce bias. However, given the low base rate of these events and the fact that both parties involved are deceased, data are often limited and difficult to obtain. We carried out a retrospective analysis of medical records to determine whether the offender had been diagnosed with any mental disorder. The shortcomings identified in using diagnoses recorded in official documents to measure mental illness were: (1) the true prevalence of mental illness may be underestimated as medical help is not always sought by people experiencing mental health problems; (2) the reliability of the recorded diagnoses has not been verified and there are likely to be inconsistencies between cases; (3) it is not known whether the diagnostic data in this study accurately reflect actual symptom-based diagnoses in a reliable and valid way. There are also difficulties in determining a person's mental state at the time of the offence based on descriptions provided in documents. This can only be achieved reliably through interviews and a psychiatric assessment of the individual following the offence, which is not possible with homicide-suicide offenders. In addition, due to the methodology used in this study, we cannot state that there was a causal relationship between mental illness and the offence. We have therefore referred to the individual's history of mental illness or depression, and acknowledge that although a person may have a history of mental illness, they may not have had active symptoms at the time of the offence, or impaired functioning. Despite these shortcomings, we consider this to be the most robust method available for defining mental illness in this population. Corroborating the findings from numerous sources of data through triangulation provided validity and rigour and helped to enhance our understanding of these events.

Clinical implications

It would be unrealistic to expect GPs or psychiatrists to prevent homicide-suicide directly, as these incidents are relatively rare and few clinicians will ever encounter this phenomenon. There are, however, improvements in service delivery that could help to reduce the number of incidents. For example, the recognition and better treatment of mental disorder, particularly depression, in primary care is achievable. Previous research has found that people with serious mental health problems, particularly major depression, are more likely to visit their GP, which demonstrates help-seeking behaviour and a willingness to engage [34,35]. It is of course more difficult to recommend preventative action for people not under the care of health services. Our findings suggest that only a quarter of the offenders who experienced emotional distress contacted their GP for psychological support in the weeks before the incident; the majority did not. Previous research has shown that people may be reluctant to engage with services due to self-stigma (personal views about mental illness) [36] and public stigma (society's perception of mental illness) [37]. Therefore, it is important to promote campaigns to reduce stigma and to raise awareness of the wide range of support available and how it can be accessed. Initiatives in the voluntary sector such as State of Mind are a good example of how this can be achieved.
The organisation raises awareness of mental health issues through sport, working to improve the mental health of professional and amateur rugby league players, the wider community and students at local colleges [38]. Secondly, our data have shown that a small proportion of homicide-suicide offenders had mental illness and a history of committing intimate partner violence. A recent meta-analysis has shown an association between domestic violence perpetration and mental illness [39]. Men with depression were shown to have an almost 3-fold increased risk of committing intimate partner violence. However, the evidence base is currently insufficient, and more research is required to identify risk factors that will inform prevention strategies. Increasing our knowledge of risk factors in this population will help to reduce the risk of future incidents.

Ethical considerations

The study received MREC approval on 9th April 2008 and was registered under the Data Protection Act (1998). Exemption under section 251 of the NHS Act 2006 (formerly section 60 of the Health and Social Care Act 2001) was obtained, enabling access to confidential and identifiable information without informed consent in the interest of improving patient care (approved 23rd October 2008). Research governance approval was sought from 49 Primary Care Trusts in England and Wales. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008.
Effects of cLFchimera peptide on intestinal morphology, integrity, microbiota, and immune cells in broiler chickens challenged with necrotic enteritis

Three hundred and sixty 1-day-old male broiler chicks were randomly allocated to 4 treatments of 6 replicates to evaluate the effects of cLFchimera, a recombinant antimicrobial peptide (AMP), on gut health attributes of broiler chickens under necrotic enteritis (NE) challenge. Treatments were as follows: (T1) unchallenged group fed a corn-soybean meal (CSM) diet without NE challenge or additives (NC); (T2) group fed CSM and challenged with NE without any additives (PC); (T3) PC group supplemented with 20 mg cLFchimera/kg diet (AMP); (T4) PC group supplemented with 45 mg antibiotic (bacitracin methylene disalicylate)/kg diet (antibiotic). Birds were sampled for villi morphology, ileal microbiota, and jejunal gene expression of cytokines, tight junction proteins, and mucin. Results showed that AMP ameliorated NE-related intestinal lesions, reduced mortality, and rehabilitated jejunal villi morphology in NE-challenged birds. While the antibiotic non-selectively reduced the count of bacteria, AMP restored microflora balance in the ileum of challenged birds. cLFchimera regulated the expression of cytokines, junctional proteins, and mucin transcripts in the jejunum of NE-challenged birds. In conclusion, cLFchimera can be a reliable candidate to substitute growth-promoter antibiotics, while more research is required to unveil the exact mode of action of this synthetic peptide.

AMP production.

cLFchimera was derived from camel lactoferrin (cLF), consists of 42 amino acids, and has the primary sequence DLIWKLLVKAQEKFGRGKPSKRVKKMRRQWQACKSSHHHHHH. In addition, the results of our previous study showed that cLFchimera had antioxidant activity (IC50: 310 μg/ml) and that its activity was not affected after 40 min of boiling 17 (for more details regarding the production process, please review previous papers 12,13,17). Briefly, preparation of the recombinant plasmid vector was conducted by transforming a recombinant expression vector harbouring synthetic cLFchimera into the DH5α bacterium 12,13. Next, the resulting bacterial colonies were cultured for plasmid extraction. The recombinant vector was then transferred into E. coli BL21 (DE3) as an expression host and cultured in 2 ml Luria-Bertani (LB) broth overnight according to a standard protocol 18. In the next step, cultured materials were inoculated into 50 ml LB and incubated at 37 °C with shaking at 200 rpm. Then, isopropyl-β-d-thiogalactopyranoside (IPTG) was added to a final concentration of 1 mM and the culture was incubated at 37 °C for 6 h after IPTG induction. Periplasmic protein was collected at different times after IPTG induction (2, 4, and 6 h) according to the method described by de Souza Cândido et al. 19 and analyzed on 12% SDS-PAGE.

Table 1. Composition of experimental diets (ingredients, %; starter, 0-10 days; grower). 1 Antibiotic (45 mg bacitracin methylene disalicylate/kg diet) and peptide (20 mg/kg diet) were added on top and thoroughly mixed. 2 Added per kg of feed: vitamin A, 7,500 UI; vitamin D3, 2,100 UI; vitamin E, 280 UI; vitamin K3, 2 mg; thiamine, 2 mg; riboflavin, 6 mg; pyridoxine, 2.5 mg; cyanocobalamin, 0.012 mg; pantothenic acid, 15 mg; niacin, 35 mg; folic acid, 1 mg; biotin, 0.08 mg; iron, 40 mg; zinc, 80 mg; manganese, 80 mg; copper, 10 mg; iodine, 0.7 mg; selenium, 0.3 mg.

To purify the expressed peptide, a Ni-NTA agarose column was used
based on the manufacturer's instructions (Thermo, USA). The quality of the purified recombinant peptide was assessed by 12% SDS-PAGE gel electrophoresis, while the Bradford method was used to quantify the recombinant peptide. More recently, an E. coli expression system was developed in our laboratory that is able to produce 0.42 g/L of recombinant peptide. Finally, four grams of peptide previously obtained from the recombinant E. coli were purified, lyophilized, thoroughly mixed with 1 kg soybean meal, and then supplemented to the relevant experimental diets.

NE challenge.

A previously described method of inducing NE was used with some modifications 20. Briefly, on day 16, chicks in the NC group were administered a single 1 ml oral dose of sterile phosphate-buffered saline (PBS) (uninfected) as a sham control, while the PC, peptide, and antibiotic groups were orally inoculated with 5,000 attenuated vaccine-strain sporulated oocysts each of E. maxima, E. acervulina, and E. tenella (Livacox T, Biopharm Co., Prague, Czech Republic) in 1 mL of 1% (w/v) sterile saline. On days 20 and 21, birds in the NE groups were orally inoculated with 1 ml of broth containing C. perfringens isolated from broiler meat 21, CIP (60.61), containing 10^7 cfu/ml in thioglycollate (Thermo-Fisher Scientific Oxoid Ltd, Basingstoke, UK) supplemented with peptone and starch. The inoculated C. perfringens was analysed using PCR to confirm the presence of the netB gene required for inducing NE in broilers, according to Razmyar et al. 22. The unchallenged group received the same dose of sterilized broth.

Growth performance.

On days 10 and 22, average daily gain (ADG), average daily feed intake (ADFI) and feed conversion ratio (FCR) were calculated using body weight (BW) and the weight of feed remaining in each pen. The ADG, ADFI and FCR were calculated over the specific and overall periods of the study (0-10, 11-22, and 0-22 days). The feed conversion ratio for each period was adjusted for mortality per pen per day, if any.

Sample collection and lesion score.

On days 10 and 22, two birds from each pen (12 birds/treatment) were randomly selected for euthanasia by cervical dislocation. The viscera were excised from the specimens, the intestine was discretely separated, and adherent materials were carefully removed. The ileum was gently pressed to aseptically collect ileal content into sterile tubes for microbiological analysis. A section of approximately 5 cm from mid-jejunal tissue was separated for morphological analysis. A 2 cm section from the mid-jejunum was detached, rinsed in cold phosphate-buffered saline (PBS), immediately immersed in RNAlater (Qiagen, Germantown, MD), and stored at − 20 °C for subsequent gene expression determination. On day 22, NE lesions of the duodenum, jejunum, and ileum from 2 birds per pen were scored on a scale of 0 (none) to 6 (high) as described previously 23.

Intestinal morphology.

The method used to prepare samples for morphometry analysis was previously described by Daneshmand et al. 24. Briefly, jejunal samples were stored in a 10% formaldehyde phosphate buffer for 48 h.
The samples were then processed on a tissue processor (Excelsior AS, Thermo Fisher Scientific, Loughborough, UK), fixed in paraffin using an embedder (Thermo Fisher Histo Star Embedder, Loughborough, UK), and cut with a microtome (Leica HI1210, Leica Microsystems Ltd., Wetzlar, Germany) to a length of 3 cm per slice. The slices were placed on a slide, dehydrated on a hotplate (Leica ASP300S, Leica Microsystems Ltd., Wetzlar, Germany), and stained with hematoxylin and eosin. Finally, the stained jejunal slices were examined under a microscope (Olympus BX41, Olympus Corporation, Tokyo, Japan). A total of 8 slides were prepared from the jejunal segment per bird, and ten individual well-oriented villi were measured per slide (80 villi/bird). The average of each measurement per sample was reported as the mean for each bird. Villus width (VW) was measured at the base of each villus and villus height (VH) from the top of the villus to the villus-crypt junction; crypt depth (CD) was measured from the base of the adjacent villus to the sub-mucosa; and the VH/CD ratio and villus surface area were calculated.

Microbial count.

The methods used to count the populations of E. coli, Clostridium spp., Lactobacillus spp., and Bifidobacterium spp. in the ileal content were described elsewhere 25. In summary, the ileal contents of a sample were thoroughly mixed, serially diluted tenfold from 10^-1 to 10^-7 with sterile PBS, and homogenized for 3 min. Next, dilutions were plated on different agar media. For the enumeration of bacteria, Lactobacillus spp. and Clostridium spp. dilutions were plated on MRS agar (Difco Laboratories, Detroit, MI) and SPS agar (Sigma-Aldrich, Germany) and anaerobically cultured at 37 °C for 48 h and 24 h, respectively. Black colonies of Clostridium spp. on SPS agar were counted. MacConkey agar (Difco Laboratories, Detroit, MI) and BSM agar (Sigma-Aldrich, Germany) were used to cultivate E. coli and Bifidobacterium spp., respectively, and incubated at 37 °C for 24 h. All microbiological analyses were performed in triplicate, average values were used for statistical analyses, and results were expressed in colony-forming units (log10 cfu/g of ileal content).

RNA extraction and gene expression.

The procedure for RNA extraction and gene expression was previously explained 26. In summary, total RNA was extracted from chicken jejunum sampled on day 22 using a total RNA extraction kit (Pars Tous, Iran) following the manufacturer's instructions. The purity and quality of the extracted RNA were evaluated using an Epoch microplate spectrophotometer (BioTek, USA) based on the 260/230 and 260/280 wavelength ratios, respectively. Genomic DNA was removed using DNase I (Thermo Fisher Scientific, Austin, TX, USA). Complementary DNA (cDNA) was synthesized from 1 µg of total RNA using the Easy cDNA synthesis kit (Pars Tous, Iran) following the manufacturer's protocol. Each reaction was performed in a total volume of 20 μl in duplicate using an ABI 7300 system (Applied Biosystems, Foster City, CA) and 2 × SYBR Green Real Time-PCR master mix (Pars Tous, Iran). Primer details are shown in Table 2. All primers were designed according to MIQE criteria 27 regarding amplification length and intron spanning. All efficiencies were between 90 and 110%, and the calculated R^2 was 0.99 for all reactions. The 2^(−ΔΔCt) method 28 was used to calculate relative gene expression in relation to the reference genes (GAPDH and β-actin).
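The relative-expression step above can be made concrete with a short sketch. This is a minimal Python illustration of the 2^(−ΔΔCt) calculation against two reference genes; the Ct values and the choice of MUC2 as the target are purely illustrative, not data from this study:

```python
import numpy as np

# Hypothetical Ct values (cycles) for one target gene (MUC2) and the two
# reference genes (GAPDH, beta-actin) in a control and a treated sample.
ct = {
    "control": {"MUC2": 24.1, "GAPDH": 18.0, "ACTB": 19.2},
    "treated": {"MUC2": 22.3, "GAPDH": 18.1, "ACTB": 19.0},
}

def delta_ct(sample, target, refs):
    # Normalize the target Ct by subtracting the mean Ct of the references.
    return sample[target] - np.mean([sample[r] for r in refs])

d_control = delta_ct(ct["control"], "MUC2", ["GAPDH", "ACTB"])
d_treated = delta_ct(ct["treated"], "MUC2", ["GAPDH", "ACTB"])

# ddCt = dCt(treated) - dCt(control); fold change = 2^(-ddCt), valid under
# the assumption of ~100% amplification efficiency for all assays.
ddct = d_treated - d_control
print(f"ddCt = {ddct:.2f}, relative expression = {2.0 ** -ddct:.2f}-fold")
```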
Statistical analysis.

Data were statistically analyzed in a completely randomized design by ANOVA using the General Linear Model (GLM) procedure of SAS (SAS Inst., Inc., Cary, NC). Tukey's test was used to compare differences among treatment means, and P values < 0.05 were considered significant.

Results

Lesion score and mortality.

Table 3 shows the effects of experimental treatments on NE-induced lesion scores in different segments of the intestine and the mortality rate of broiler chickens. The results showed that none of the additives could fully rehabilitate the NE-induced lesions in the intestine compared to NC. While the highest lesion scores in the intestine of the PC group showed that NE was efficiently induced in broilers, AMP decreased (P < 0.05) lesions in the duodenum, jejunum, and ileum of broilers compared to the challenged group. Inducing NE in broilers increased (P < 0.05) mortality, while birds fed the peptide showed a lower (P < 0.05) mortality rate compared to the PC group and, interestingly, had results similar to the NC group.

Table 2. Sequences of primer pairs used for amplification of target and reference genes. 1 For each gene the primer sequence for forward and reverse (5´ → 3´), the product size (bp), and the annealing temperature (Ta) in °C are shown. 2 ANXA1, annexin A1; TRAF3, tumor necrosis factor receptor associated factor 3; MUC2, mucin2; CLDN1, claudin1; OCLDN, occludin; GAPDH, glyceraldehyde 3-phosphate dehydrogenase.

Growth performance.

Table 4 presents the effects of experimental diets on the growth performance of broilers. While the antibiotic improved (P < 0.05) birds' ADG in the first 10 days of age, the peptide showed the best FCR at the end of the starter period, before any challenge had been induced. At day 22, NE challenge decreased (P < 0.05) ADG and increased (P < 0.05) feed intake, which worsened (P < 0.05) FCR in broilers. In this latter phase, the peptide showed effects similar to the NC group and improved (P < 0.05) performance indices compared to the PC group.

Jejunal villi morphology.

The effects of treatments on jejunal morphology are shown in Table 5. On day 10, experimental diets had no significant effects on the morphometry of the intestine. NE challenge significantly impaired villi structure and morphology, while AMP enhanced (P < 0.05) villus height, width, and surface area (VSA) compared to the PC group and had an effect similar to the NC group at 22 days of age. Experimental diets had no significant effects on CD and VH/CD at day 22. Although the antibiotic improved (P < 0.05) villi morphology compared to NE-challenged birds, it could not restore villi characteristics to those of NC.

Bacterial colonization.

Table 6 summarizes the effects of experimental diets on ileal bacterial populations. At day 10, the antibiotic decreased (P < 0.05) the populations of all bacteria compared to other groups. NE challenge increased (P < 0.05) the count of E. coli and, expectedly, Clostridium spp., and decreased (P < 0.05) the numbers of Lactobacillus spp. and Bifidobacterium spp. in the ileum of birds. In the same period, the antibiotic group had the lowest (P < 0.05) populations of all cultured ileal bacteria compared to both PC and NC groups. Interestingly, AMP had effects similar to the NC group: it increased (P < 0.05) the populations of Lactobacillus spp. and Bifidobacterium spp. and decreased (P < 0.05) the colonization of E. coli and Clostridium spp. in the ileum of chickens compared to the challenged birds at 22 days of age.
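The pairwise treatment comparisons reported in these results follow the one-way ANOVA and Tukey procedure described under Statistical analysis (the study itself used SAS GLM). A minimal sketch of an equivalent analysis in Python; the response values below are simulated for illustration only:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Simulated response (e.g., log10 cfu/g ileal content), 6 replicates/treatment.
means = {"NC": 7.1, "PC": 8.3, "AMP": 7.2, "Antibiotic": 6.1}
samples = {g: rng.normal(m, 0.3, size=6) for g, m in means.items()}

# Overall one-way ANOVA across the four treatments.
f_stat, p_val = f_oneway(*samples.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_val:.4g}")

# Tukey's HSD for pairwise comparison of treatment means at alpha = 0.05.
values = np.concatenate(list(samples.values()))
labels = np.repeat(list(samples.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```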
Gene expression of immune cells and tight junctional proteins.

The effects of treatments on the expression levels of immune and tight junction genes are presented in Fig. 1. While NE challenge increased (P < 0.05) TRAF3 and ANXA1 transcripts, adding the antibiotic and AMP to the diet reduced (P < 0.05) the expression of immune genes compared to the PC group, and the antibiotic had effects similar to those of non-challenged birds. While NE challenge increased the expression of MUC2 in the jejunum of birds, AMP decreased (P < 0.05) the level of MUC2 transcripts compared to the PC group, reaching levels similar to the NC group. Birds challenged with NE had the lowest expression levels of CLDN1 and OCLN genes in their jejunum. AMP increased (P < 0.05) the expression of CLDN1 in the jejunum of birds compared to the PC group and had effects similar to those of non-challenged birds. Although birds fed AMP had higher (P < 0.05) OCLN mRNA in the jejunum compared to the PC group, this group had lower (P < 0.05) expression of this transcript compared to the NC group. The antibiotic did not affect the level of tight junction transcripts in the jejunum of NE-challenged birds compared to either the NC or PC groups.

Discussion

Necrotic enteritis is still a global concern causing drastic losses on poultry farms, mainly due to retarded growth performance, increased mortality, and veterinary costs 23. The outbreak of the disease, and consequently its economic losses, has become more prominent in the post-antibiotic era 23. Recently, research focus has shifted to AMPs due to their beneficial roles in health attributes and their prophylactic effects against pathogenic invasion 12,13,17. Therefore, the principal objective of the current study was to investigate the effects of the antimicrobial peptide cLFchimera on various productive and health parameters in chickens challenged with NE. Results of the current study showed that AMP decreased gut lesions and mortality induced by NE. Additional benefits included improved growth attributes in challenged chickens, similar to birds fed the antibiotic, confirming the results of previous studies 9,10. While most previous research has studied the effects of AMPs in chickens under normal conditions, Hu et al. 29 demonstrated that supplementing the broiler diet with AMP improved weight gain and FCR under heat stress. In another challenge study, Wu et al. 30 inoculated weanling pigs with E. coli and supplemented the diet with AMP. The authors reported that AMP reduced the incidence of diarrhea and improved weight gain and FCR compared to the challenged group, which parallels the present findings regarding the reduction in gut lesions and improvement in performance. Previous studies attributed the beneficial effects of AMPs on the growth performance of chickens to their fundamental roles in maintaining microbial balance in the gut and the consequent improvement in intestinal morphometry 9,10. It is well documented that villi play a critical role in absorbing nutrients from the intestinal tract; consequently, the morphometry of these villi can drastically affect the host's performance and health 31. The present study confirmed that AMP significantly improved the morphometry of villi in the jejunum of challenged chickens, similar to that of the NC group 9,10. It has been reported that AMPs extracted from pig intestine 32 and rabbit sacculus rotundus 33 enhanced jejunal villi characteristics in broiler chickens, which is in line with the present results.
Generally, the critical strategy for maintaining villi structure in an infectious disease like NE is the elimination or minimization of pathogens through the provision of antimicrobial additives and manipulation of the intestinal microbiome 34,35. Previous studies showed that the antibiotic and AMPs could improve villi morphology and nutrient absorption, and consequently increase growth performance in chickens under disease conditions, by manipulating the intestinal microflora 9,10,17. The intestinal commensal microbiome interacts with the host through different processes, including nutrient absorption, villi morphology, intestinal pH, and mucosal immunity 36,37. In the current study, supplementation with the antibiotic reduced the colonization of all bacteria, while AMP significantly enhanced the microflora balance in the ileum. In agreement with the present study, Tang et al. 7 and Ohh et al. 38 reported that AMPs significantly enhanced the microflora balance in the ileum of piglets and broilers, respectively. The antimicrobial action of bacitracin methylene disalicylate (BMD) involves blocking the bacterial ribosome subunits and subsequently impeding protein synthesis, which ultimately reduces the colonization of the microbial community in the intestine 39. Unfortunately, this antibiotic does not differentiate between commensal and pathogenic bacteria and may perturb microbial balance in the intestine, depriving the host of the benefits of microbial roles and products 39,40.

Table 6. Effects of treatments on ileal microflora (log10 CFU g^-1) in broilers at 10 and 22 days of age. a-c Values within a column with different letters differ significantly (P < 0.05). 1 NC: negative control group received a corn-soybean meal diet without challenge and additives; PC: positive control group received the NC diet and was experimentally challenged with necrotic enteritis; AMP: PC group supplemented with 20 mg antimicrobial peptide/kg diet; Antibiotic: PC group supplemented with 45 mg antibiotic (bacitracin methylene disalicylate)/kg diet. 2 SEM: standard error of means (results are given as means (n = 12) for each treatment).

There is no consensus on the mechanism by which AMPs influence bacterial colonization in the intestine, although two mechanisms, direct and indirect, have been proposed based on the physiological properties of peptides. The direct antimicrobial effect of AMPs has been attributed to the different surface charges of peptides and pathogens 41. In other words, AMPs carry positive charges that allow them to adhere electrostatically to negatively charged bacterial membranes.

Figure 1. Samples were analyzed using qPCR, and GAPDH and β-actin were used as the reference genes. Abbreviations: ANXA1, annexin A1; TRAF3, tumor necrosis factor receptor associated factor 3; MUC2, mucin2; unchallenged, control birds received a corn-soybean meal basal diet without AMPs, antibiotic, or necrotic enteritis (NE) challenge; challenged, control birds challenged with NE; peptide, birds challenged with NE and supplemented with 20 mg cLFchimera/kg diet; antibiotic, birds challenged with NE and supplemented with 45 mg antibiotic (bacitracin methylene disalicylate)/kg diet. Letters on the bars denote significant differences (P < 0.05).

In the indirect mode, AMPs might manipulate the microbial community of the intestine in favour of the colonization of beneficial bacteria (e.g. Lactobacillus spp. and Bifidobacterium spp.)
and enhance host health through various physiological mechanisms (e.g. competitive exclusion, secretion of short-chain fatty acids, activation of the intestinal immune system) 42. Previous findings suggested that cLF36 could attach to the bacterial membrane through electrostatic interactions and physically disrupt bacterial bilayer membranes 12,13,16. In line with previous reports 44, the current results demonstrate that AMP can selectively prevent bacterial growth in the intestine of NE-challenged chickens, which may represent a competitive advantage of cLFchimera over antibiotics. Furthermore, previous research reported that the antimicrobial activities of AMPs against pathogens in the intestine might alert the host immune system to fight invading agents 45,46.

Mucosal immunity plays an important role in host defense against pathogens 47. In broiler chickens, when Toll-like receptor (TLR)4 engages microbe-associated molecular patterns (MAMPs), it transmits the information to the cytoplasm of the phagocytes, which in turn leads to the expression of cytokines 48,49. Different research groups have reported conflicting results regarding the effects of C. perfringens challenge on TLR4 gene expression in the intestine of broiler chickens. For instance, while some researchers reported that C. perfringens upregulated TLR4 gene expression in the intestine of chickens 50,51, other investigators reported no apparent alteration of TLR4 gene expression in C. perfringens-challenged chickens 52. Therefore, in the current study, we decided to analyze the gene expression of TRAF3, which acts one step downstream of TLR4 activation, to overcome the possible interference of other immune cells 53. TRAF3 is a cytoplasmic protein that controls signal transduction from different receptor families, especially TLRs 53. Following the activation of TLR4 by pathogen attachment, TRAF3 is recruited into signaling complexes, and its activation increases the production of vital pro-inflammatory cytokines 54,55. Results of the present study showed that NE upregulated the expression of the TRAF3 transcript in the jejunum of chickens, while the antibiotic and AMP significantly decreased the gene expression of this cytokine in the challenged birds, which is consistent with the results of previous studies 54,56. On the other hand, excessive and long-term production of pro-inflammatory cytokines might result in gut damage and high energy demand 57. To prevent the adverse effects of excess pro-inflammatory cells, pro-resolving mediators such as ANXA1 are released into the epithelial environment to orchestrate the clearance of inflammation and the restoration of mucosal homeostasis 58,59. ANXA1 is a 37 kDa protein expressed in the apical and lateral plasma membrane of intestinal enterocytes that facilitates the resolution of inflammation and repair 60. In the current study, C. perfringens upregulated the expression of ANXA1 mRNA in the jejunum of the PC group, which is in agreement with previous reports 54,61, while the antibiotic and AMP significantly decreased the expression of this cytokine, which is reported here for the first time.
While there is no well-documented evidence to explain the cytokine expression results, it could be inferred that the antimicrobial activities of the antibiotic and AMP in the current study reduced invading pathogens (based on the microbial results above) in the intestine of PC birds, possibly downregulating the expression of both pro- and anti-inflammatory (i.e. TRAF3 and ANXA1, respectively) cytokine-producing cells.

The epithelial barrier consists of tight junction proteins forming the primary line of defence against a wide range of stimuli, from feed allergens to commensal and pathogenic bacteria 62,63. The disruption of these proteins may increase intestinal permeability to luminal pathogens 62,63. Previous studies showed that C. perfringens can attach to junctional proteins to form gaps between the epithelial cells and disrupt intestinal integrity 63,64. In the present study, NE challenge reduced the jejunal expression of OCLN and CLDN1 transcripts, which is in agreement with previous studies 64,65. Previous studies showed that tight junction proteins, especially CLDN1 and OCLN, have a specific region (i.e. ECS2) containing a toxin-binding motif, NP(V/L)(V/L)(P/A), that is responsible for binding to C. perfringens 63,66. Following attachment to junctional proteins, C. perfringens can digest these proteins 67 and open the intracellular connection between adjacent epithelial cells, resulting in greater penetration of pathogens into the deeper layers of the lamina propria and transmission to other organs 63,68. AMP significantly upregulated the expression of these genes in the PC group, while the antibiotic had no significant effect on the gene expression of junctional proteins. In agreement with the current findings, previous reports demonstrated that AMPs could increase the expression of junctional proteins under different challenge conditions 17,69. While no exact mechanism has been identified, two theories have been suggested for the inhibitory effects of AMPs on C. perfringens with respect to junctional proteins. The first theory suggests that AMPs could directly switch on the expression of regulatory proteins (i.e. the Rho family) in the intestine of challenged mice, upregulating the expression of tight junction proteins and ameliorating leaky gut 69,70. The second theory attributes the beneficial effects of AMPs on tight junctions to their indirect role in manipulating microflora populations in the intestine. In detail, previous studies showed that intestinal commensal bacteria like Bifidobacteria and Lactobacilli secrete butyric acid, which regulates epithelial O2 consumption and stabilizes hypoxia-inducible factor. This transcription factor protects the epithelial barrier against pathogens, resulting in higher expression of junctional proteins 71,72. Therefore, it can be hypothesized that AMP in the current study upregulated the expression of junctional proteins both by reducing the number of C. perfringens and by inhibiting protein disruption by bacterial toxins. Surprisingly, the antibiotic did not change the expression of CLDN1 and OCLN transcripts in the jejunum of chickens, although it might be expected that the antibiotic would upregulate the junctional proteins given the antibacterial nature of antibiotics. In line with the current results, Yi et al.
69 reported that antibiotics might not affect the gene expression of junctional proteins of the epithelial cells after pathogen removal, possibly because microbial balance in the intestine was already being controlled. Along with junctional proteins, the luminal mucus layer, comprising mucins, plays a defensive role against invasive pathogens 73. MUC2 is widely expressed in goblet cells and secreted into the intestinal lumen to stabilize the mucosal layer 73,74. Any damage to the mucosal layer stimulates the expression of MUC2 to secrete more mucin and prevent further destruction 74,75. In the current study, NE significantly increased the gene expression of MUC2 in the jejunum, which is in agreement with the results of previous studies 76,77. On the other hand, the antibiotic and AMP significantly downregulated the level of this transcript, and the results for AMP were similar to those of the NC group. According to the bacterial results in the present study, it could be inferred that the inhibitory effects of AMP on the populations of C. perfringens and E. coli might reduce the colonization of these bacteria in the intestine, decrease the destruction of the mucosal layer, and subsequently lessen the expression of the MUC2 transcript. The exact mechanism of these effects has not yet been revealed.

In conclusion, the results of the current study propose that cLFchimera, an antimicrobial peptide originating from camel milk, could reduce mortality and attenuate NE-induced lesions in broilers. Beneficial consequences of AMP use include better growth performance and recovery of villi morphology in the jejunum of NE-challenged chickens. Further, supplementing NE-challenged birds with cLFchimera restored the ileal microflora and consequently regulated the expression of cytokines, MUC2, and tight junctional proteins. Therefore, according to the results obtained in the present study, cLFchimera can be nominated as a candidate for replacing growth-promoting antibiotics against NE in chickens, while further studies may find other favourable effects of this AMP.

Data availability

All generated and analysed data in the current study are included in this article, and cited data are included in the reference list.
Benznidazole-Resistance in Trypanosoma cruzi Is a Readily Acquired Trait That Can Arise Independently in a Single Population

Benznidazole is the frontline drug used against Trypanosoma cruzi, the causative agent of Chagas disease. However, treatment failures are often reported. Here, we demonstrate that independently acquired mutations in the gene encoding a mitochondrial nitroreductase (TcNTR) can give rise to distinct drug-resistant clones within a single population. Following selection of benznidazole-resistant parasites, all clones examined had lost one of the chromosomes containing the TcNTR gene. Sequence analysis of the remaining TcNTR allele revealed 3 distinct mutant genes in different resistant clones. Expression studies showed that these mutant proteins were unable to activate benznidazole. This correlated with loss of flavin mononucleotide binding. The drug-resistant phenotype could be reversed by transfection with wild-type TcNTR. These results identify TcNTR as a central player in acquired resistance to benznidazole. They also demonstrate that T. cruzi has a propensity to undergo genetic changes that can lead to drug resistance, a finding that has implications for future therapeutic strategies.

Chagas disease is caused by Trypanosoma cruzi, a flagellated protozoan parasite transmitted by bloodsucking triatomine bugs. In Latin America, 10 million people are infected, with >15 000 deaths annually [1]. Because of migration, the disease is also undergoing globalization. In the United States, there are an estimated 300 000 infected individuals [2]. Chagas disease has three phases: acute, indeterminate, and chronic. The acute stage is usually asymptomatic, although it can present as a febrile illness in children and young adults, with a fatality rate of up to 5%. Most symptoms resolve within 4-6 weeks, and patients then enter the indeterminate stage. In the majority of cases, active disease does not proceed further. However, approximately 30% of individuals progress to the chronic phase, a process that can occur many years after the initial infection. This can result in serious cardiac and digestive tract pathologies, where prognosis is poor. There is no immediate prospect of a Chagas disease vaccine, and infection is lifelong. Chemotherapy is therefore of major importance. For many years, benznidazole and nifurtimox have been the only drugs available [3]. However, their use is characterized by toxicity, and their efficacy against chronic-stage disease is unreliable. In addition, cases refractory to treatment are commonly reported [4], and drug-resistant parasites can be selected in the laboratory [5,6].

Benznidazole and nifurtimox are nitroheterocyclic compounds that contain a nitro group linked, respectively, to an imidazole and a furan ring [3]. They are prodrugs and require nitroreductase (NTR)-catalyzed activation within the parasite to have trypanocidal effects. Two classes of NTR have been identified in trypanosomes. Type II NTRs are O2-sensitive flavin-containing enzymes that are capable of 1-electron reduction of nitro drugs to generate an unstable nitro radical [7]. In the presence of O2, this can lead to the production of superoxide anions and regeneration of the parent nitro compound, a process known as redox cycling [8,9]. Although activation of nitroheterocyclic drugs by T.
cruzi has been associated with the formation of reactive oxygen species (ROS), and candidate reductases have been implicated, there is no evidence that enhancing the parasite oxidative defense system has a protective effect [10-15]. Furthermore, addition of benznidazole to T. cruzi extracts does not lead to the generation of ROS [16]. Type I NTRs are O2-insensitive, flavin mononucleotide-dependent enzymes that can mediate the 2-electron reduction of nitro drugs through a nitroso intermediate to hydroxylamine derivatives. These can react further to generate nitrenium cations and other highly electrophilic intermediates, which may promote damage to DNA and other macromolecules [17,18]. Two enzymes with type I activity have been identified in T. cruzi. The first is prostaglandin F2α synthase [19], although this is only capable of mediating 2-electron reduction under anaerobic conditions. The second, for which there is now strong evidence of a central role in activating nitro drugs, is a nicotinamide adenine dinucleotide, reduced (NADH)-dependent mitochondrial type I NTR [5]. In the case of nifurtimox, an active unsaturated open-chain nitrile metabolite contributes to the resulting trypanocidal activity [20]. TcNTR can reduce a range of nitroheterocycles, and deletion of the corresponding genes from T. cruzi and Trypanosoma brucei results in loss of sensitivity [5]. Consistent with this, a genome-wide RNA interference screen of T. brucei for genes associated with nifurtimox and benznidazole resistance by loss-of-function mechanisms identified TbNTR as the major candidate [21]. To investigate the capacity of T. cruzi to develop resistance against benznidazole, we generated resistant clones following in vitro selection. Here, we show that distinct drug-resistant clones can arise independently and that, in each case, resistance under selective pressure is associated with loss of TcNTR activity.

Parasites

T. cruzi MRAT/COL/Gal61 (Table 1) [22] were cultivated in supplemented Roswell Park Memorial Institute (RPMI) 1640 medium at 28 °C [23]. Clones were derived by limiting dilution. Transformed T. cruzi were maintained at 10 μg/mL blasticidin or 50 μg/mL G418. Amastigotes were grown in African green monkey kidney (Vero) or rat skeletal myoblast L6 cells cultured in RPMI 1640/10% fetal bovine serum at 37 °C in 5% CO2. To generate metacyclic trypomastigotes, epimastigote cultures were grown to stationary phase, at which point they differentiated. These were used to infect monolayers at a ratio of 5 metacyclics per mammalian cell. Following overnight incubation at 37 °C, extracellular metacyclics and epimastigotes were removed by several washes. Bloodstream-form trypomastigotes emerged between days 7 and 10, and this homogeneous population was used in quantitative infection experiments. Intact T. cruzi chromosomes were extracted using an agarose-embedding technique [24] and were fractionated by contour-clamped homogeneous field electrophoresis (CHEFE), using a BioRad CHEFE Mapper. For analysis of natural benznidazole sensitivity, TcNTR from 28 T. cruzi strains from different regions of Colombia was amplified and sequenced. To generate benznidazole resistance, epimastigotes were seeded at the median inhibitory concentration (IC50) and subcultured for several weeks under selective pressure. The drug concentration was then doubled and the process repeated. This was continued until a resistant population (61R) was established at 50 µM, the reported level of therapeutic resistance [25].
IC50 values were determined by an enzymatic micromethod [26]. A total of 2 × 10^6 epimastigotes/mL were cultured with different drug concentrations for 72 hours at 28 °C in 96-well microtiter plates. The plates were then incubated with 10 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) for 90 minutes, and MTT reduction to formazan crystals was measured at 595 nm.

Construction of Vectors

For expression of TcNTR, a 708-base pair fragment corresponding to the catalytic domain of the protein was amplified using DNA from sensitive and resistant clones [5]. Fragments were digested with BamHI/HindIII and ligated into the vector pTrcHis-C (Invitrogen), and the resulting constructs were used to transform Escherichia coli BL-21. To express active protein in benznidazole-resistant T. cruzi, the full-length TcNTR gene (939 bp) was amplified from 61S DNA and ligated into the BamHI/HindIII site of the vector pTEX [27]. Parasites were electroporated and transformants selected with G418. To generate TcNTR heterozygotes from 61S parasites, we used gene disruption with a construct containing a blasticidin-resistance cassette [5]. All constructs were confirmed by sequencing.

Biochemical Analysis

E. coli transformed with pTrcHis-TcNTR were treated with isopropyl-β-D-thiogalactopyranoside to induce expression of recombinant histidine-tagged proteins, which were purified on nickel-nitrilotriacetic acid columns [5,28]. Fractions were analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis, and protein concentrations were determined by the BCA assay (Pierce). TcNTR activity was measured by following the changes in absorbance at 340 nm due to NADH oxidation. The TcNTR flavin cofactor was established by determining the fluorescence spectrum in acidic and neutral buffers [29].

Benznidazole-Resistant T. cruzi Lack One of the Chromosome Bands Containing the TcNTR Gene

To select for benznidazole resistance, T. cruzi GAL61 (Table 1) were subjected to continuously increasing drug pressure until we had established a population (61R) that grew at a comparable rate in the presence or absence of 50 µM benznidazole (Materials and Methods). This population displayed approximately 10-fold resistance. Six clonal lines derived from this population exhibited 3-7-fold resistance when examined independently (Figure 1A). In the absence of drug, the clones grew slightly slower in culture than the parental cells (doubling times of 28-42 hours, compared with 26 hours) but otherwise displayed no obvious morphological changes. Previously, when we generated nifurtimox-resistant T. cruzi, we found that they were also resistant to other nitroheterocyclic drugs, including benznidazole [5]. A similar cross-resistance phenomenon was observed here, with 2-fold greater resistance to nifurtimox and 4-fold greater resistance to nitrofurazone (Figure 1B). Nifurtimox resistance in both T. cruzi and T. brucei has been associated with downregulation or loss of a type I NTR gene [5,21]. We therefore examined the benznidazole-sensitive and -resistant cells for changes in copy number at this locus. In the sensitive parental cells (61S), TcNTR is a single-copy gene located on chromosome homologues of 1.1 Mb and 0.85 Mb. With the resistant parasites, however, the 0.85-Mb band was missing in clonal and polyclonal populations (Figure 1C, lanes 2 and 3). There were no other apparent changes to the chromosome profile.
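IC50 values like those quoted here and in Table 1 are derived from dose-response data of this kind (MTT signal versus drug concentration). The micromethod itself is only cited [26], so the fitting step below is a generic sketch rather than the authors' exact procedure, and the data points are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    # Standard four-parameter logistic dose-response curve.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Invented viability data (% of untreated control) at rising benznidazole (uM).
conc = np.array([0.5, 1, 2, 5, 10, 20, 50, 100], dtype=float)
viability = np.array([98, 95, 88, 70, 48, 30, 12, 5], dtype=float)

# Least-squares fit; the third parameter is the estimated IC50.
popt, _ = curve_fit(logistic4, conc, viability, p0=[0.0, 100.0, 10.0, 1.0])
print(f"Estimated IC50 = {popt[2]:.1f} uM (Hill slope = {popt[3]:.2f})")
```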
To determine whether drug resistance was associated with loss of TcNTR rather than another gene located elsewhere on the missing chromosome, we reintroduced an active copy of TcNTR into 61R clone 2 [27] (Figure 1D). When the transformed cells were assessed, we found that benznidazole sensitivity had been restored.

The Remaining TcNTR Allele in Each Benznidazole-Resistant Clone Encodes an Inactive Protein

We next investigated whether the remaining chromosomal copy of TcNTR in the benznidazole-resistant 61R parasites had altered. Genes from the 6 resistant clones were amplified and sequenced. Missense mutation(s) were identified in each case. In clones 1, 2, 4, and 5, there was a C/T transition at position 374, compared with the TcNTR gene amplified from sensitive clones. In the protein, this would result in replacement of the evolutionarily conserved Pro-125 with leucine (Figure 2). With clone 6, in addition to the mutation at position 374, we also identified a missense mutation at nucleotide 460 (C/G), giving rise to the conversion of Pro-154 to alanine. For clone 3, there was a single missense mutation resulting in a C/G transversion at nucleotide 477, leading to the replacement of Phe-159 with leucine. No other mutations were observed in the TcNTR genes isolated from the resistant clones. In the O2-insensitive E. coli nitroreductase nfsB, most mutations associated with nitrofuran resistance are located in the region corresponding to those in TcNTR (Figure 2) [30,31]. To determine whether the TcNTR mutations had perturbed activity, we amplified a fragment encoding the catalytic region of the enzyme, using DNA from 61S and 61R clones 3, 4, and 6. TcNTR is mitochondrial, and previous attempts to express the active full-length enzyme had been unsuccessful [5]. Activity was only detectable when the amino-terminal domain was excluded from the recombinant protein (Figure 2). After sequence confirmation, the expressed histidine-tagged proteins were purified on nickel columns (Materials and Methods; Figure 3A). Fractions containing recombinant protein derived from the 61S TcNTR gene were yellow, as expected of a flavoprotein. Those containing enzyme derived from the resistant clones were colorless. The capacity of the recombinant enzymes to reduce benznidazole and nifurtimox was established from double-reciprocal plots of 1/TcNTR activity against 1/drug concentration, at a fixed NADH concentration (100 μM) (Materials and Methods; Figure 3B and 3C). For the enzyme derived from the sensitive clone, we established apparent Km values (±SD) of 28.0 ± 2.7 μM for benznidazole and 15.5 ± 3.5 μM for nifurtimox. Further analysis gave apparent Vmax values (±SD) of 1824 ± 154 nmol NADH oxidized per minute per milligram for benznidazole and 399 ± 14 nmol NADH oxidized per minute per milligram for nifurtimox. When each of the mutant TcNTRs was analyzed, no activity could be detected, even when 10 times as much recombinant protein was used. We then investigated the mutant proteins for flavin binding (Figure 3D), using fluorescence detection under neutral and acidic conditions (Materials and Methods). At neutral pH and with an excitation wavelength of 450 nm, the flavin mononucleotide standard and the 61S TcNTR-derived cofactor both gave a fluorescence profile that peaked at 535 nm, a signal that was quenched under acidic conditions. By contrast, with FAD the 535 nm peak occurs at pH 2 and is quenched at pH 7. No flavin fluorescence was detected with mutant TcNTR protein (Figure 3D).
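The apparent Km and Vmax values above come from double-reciprocal (Lineweaver-Burk) plots of 1/rate against 1/[substrate]. A brief sketch of that linearization; the rate data are synthetic, generated to be consistent with the reported benznidazole constants for the sensitive-clone enzyme:

```python
import numpy as np

# Synthetic substrate concentrations (uM benznidazole) and initial rates
# (nmol NADH oxidized/min/mg), generated from Km = 28 uM, Vmax = 1824.
s = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
v = 1824.0 * s / (28.0 + s)

# Lineweaver-Burk: 1/v = (Km/Vmax)*(1/s) + 1/Vmax, fitted by least squares.
slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
vmax = 1.0 / intercept
km = slope * vmax
print(f"Vmax = {vmax:.0f} nmol/min/mg, Km = {km:.1f} uM")
```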
Infectivity of Benznidazole-Resistant Parasites

To investigate the scope for drug resistance in the field to result from loss/inactivation of TcNTR genes, we examined the effects of these events on infectivity. First, we generated heterozygous parasites to test for haploid insufficiency. One TcNTR allele in the 61S genome was disrupted by targeted integration (Supplementary Figure 1). The 61S TcNTR+/− epimastigotes grew at the same rate in culture as homozygotes, and to the same density. When these heterozygotes were examined for benznidazole resistance, there had been a 4-fold increase (Figure 4A). These parasites were used to infect rat myoblast L6 cells. No differences were observed in the ability of the heterozygotes to develop into infective metacyclic trypomastigotes, to invade cells (Figure 4B), to grow as intracellular amastigotes (Figure 4C), and subsequently to differentiate into bloodstream trypomastigotes. Therefore, drug resistance that arises through loss of 1 copy of TcNTR is not associated with a reduction in infectivity in vitro. The infective phenotype of the 61R resistant clones, which contain a single inactive copy of TcNTR, was also examined. In culture, epimastigotes differentiated into metacyclic trypomastigotes at a level similar to sensitive clones. When culture-derived trypomastigotes were used to initiate infections, all the resistant clones tested (clones 3, 4, and 6) were able to develop through the intracellular cycle as amastigotes and differentiate into bloodstream trypomastigotes, which were released following host cell lysis. At 2 levels, however, we observed a reduction in virulence.

Figure 2 [30]. Full-length copies of TcNTR from 61R resistant clones were amplified and sequenced. Differences in the amino acid sequence compared to the parental TcNTR (61S) were restricted to a single region and are highlighted in red. Several 61S clones were sequenced, but no differences were identified. The sequence in this region of 61S TcNTR (residues 112-162) is identical to that in the genome strain CL Brener (GenBank accession no. XP_810645). The corresponding CL Brener TcNTR residues are numbered 110-160 because of an insertion or deletion in the amino-terminal domain. Mutations in the corresponding region of E. coli nfsB that confer nitrofurantoin resistance are indicated by asterisks [31].

When Vero cells were used (Figure 4D), the number infected by resistant clones was significantly less than the level observed with the parental sensitive parasites (Figure 4E), and the average number of amastigotes per infected cell was reduced (Figure 4F). When L6 cells were infected with drug-resistant metacyclics, although released trypomastigotes could be observed, their numbers were too few for a quantifiable infection assay to be performed. This compares to an infection rate of approximately 25% in the case of the 61S TcNTR heterozygotes and homozygotes (Figure 4B). These experiments therefore suggest that functional loss of both TcNTR genes, by the mechanisms identified here, is associated with a reduction in virulence that would reduce the capacity of highly drug-resistant parasites to spread within the population.

TcNTR Diversity and Benznidazole Sensitivity in the Field

To explore possible relationships between natural susceptibility to benznidazole and TcNTR, we sequenced the gene from 28 Colombian strains of different biological and geographical origins and with a range of benznidazole sensitivities (IC50, 1.5-35 μM) (Table 1).
TcNTR length varied between 939 and 951 nucleotides in these strains, mainly because of changes in the copy number of a trinucleotide repeat, (ATC)5-9, located between residues 210 and 238. This region of the protein is not required for enzyme activity [5]. Excluding this repeat, we identified 42 polymorphisms, 25 of which were nonsynonymous. These amino acid differences were restricted to 7 strains, all but one of human origin. None of the polymorphisms were located in the region of TcNTR where we had identified mutations associated with benznidazole resistance. Most were located in the amino-terminal extension (Supplementary Figure 2). The major amino acid haplotype group encompassed 21 strains of various biological and geographical origins. Importantly, these had a wide range of benznidazole sensitivities (IC50, 4-35 μM) (Table 1). This extensive natural variation is therefore independent of TcNTR sequence and must be due to other factors. This suggests that resistance arising from changes to TcNTR is an acquired trait that requires selective pressure.

DISCUSSION

Despite being the frontline drug against T. cruzi infections for >40 years, benznidazole has drawbacks [4,32]. It can have serious side effects, it requires long-term administration (30-60 days), and its efficacy against chronic-stage disease is inconsistent. Treatment failures are widely reported, although the extent to which this is an acquired trait or reflects diversity in the level of susceptibility within natural parasite populations is unknown [33]. As shown here and elsewhere [5,34,35], laboratory selection of drug-resistant T. cruzi is readily achievable, but in the case of benznidazole and nifurtimox, it is only recently that a mechanism has been identified [5]. Activation of these prodrugs by the trypanosome type I NTR, an enzyme absent from mammals, is central to their mode of action and explains why they are more toxic to the parasite than to the host. The 61R benznidazole-resistant T. cruzi clones that we investigated were characterized by loss of a 0.85-Mb chromosome band containing TcNTR. Genome plasticity is a common phenomenon in trypanosomes [24]. Confirmation that reduced TcNTR expression caused this resistance was provided by reversion of the phenotype following reintroduction of the gene. Unexpectedly, we also found that in each of the 61R clones examined, the TcNTR gene on the 1.1-Mb chromosome homologue had acquired missense mutation(s) that rendered the expressed product enzymatically inactive (Figures 2 and 3). The most parsimonious explanation for our data is that drug pressure led initially to selection of benznidazole resistance because of loss of the TcNTR-containing 0.85-Mb chromosome. Continued treatment then resulted in selection, from within this population, of distinct lineages in which mutation(s) had inactivated the remaining TcNTR gene. The acquisition of 2 distinct missense mutations in TcNTR of clone 6 (nucleotides 374 and 460) implies consecutive events. This 2-step process is reminiscent of what happens in E. coli, where increased nitrofuran resistance resulted from consecutive mutations in the type I NTR genes nfsA and nfsB [31]. The mutant TcNTR proteins were found to be deficient in flavin mononucleotide binding. In the NTR group of enzymes, the location of flavin binding is highly conserved within the overall structure [30]. All of the mutations in TcNTR were restricted to a region (residues 125-159; Figure 2) that, in the E.
coli enzyme, contains residues that interact with the isoalloxazine O2, N3, O4 face of flavin mononucleotide [30]. The mutation of residue 125 resulted in conversion of an evolutionarily conserved proline to a leucine (clones 1, 2, 4, 5, and 6). At position 154 in clone 6, proline was converted to alanine. Both changes would be expected to perturb structure. In clone 3, the mutation associated with disruption of flavin mononucleotide binding involved conversion of phenylalanine 159 to leucine. Phenylalanine is present at the corresponding position in E. coli and T. cruzi NTRs (Figure 2), suggesting a functionally conserved role. The ability of distinct TcNTR-deficient T. cruzi clones to arise independently in a single population is strong evidence that the drug-activating properties of this enzyme are central to the trypanocidal mechanism. The TcNTR single knockouts were 4-fold less susceptible to benznidazole (Figure 4), a level of resistance that is significant in the context of this drug, for which the therapeutic window is limited [3]. Their virulence properties in vitro were also indistinguishable from TcNTR homozygotes. This potential for benznidazole resistance by a straightforward mechanism, coupled with the absence of haploid insufficiency, may explain some of the observed treatment failures. The inability of the 61S strain to produce a patent infection in mice has prevented us from investigating this further. Complete loss of TcNTR activity in the 61R resistant clones did, however, have a detrimental effect on infectivity in vitro (Figure 4). This implies that in vivo there will be a limit to the extent of benznidazole resistance achievable by mechanisms involving TcNTR (approximately 4-fold), since parasites need to retain a residual level of enzyme activity. When we investigated possible relationships between susceptibility to benznidazole and TcNTR sequence in a diverse group of parasites (Table 1), we found no correlation. These data suggest that natural variation in sensitivity does not involve mutations in TcNTR and that resistance by this mechanism may be a trait that arises only after selective pressure. Currently, there is no information on the extent to which treatment failures reflect natural or acquired resistance. An observation with wider implications for the treatment of Chagas disease is the ease with which drug resistance can arise. In a single experiment, we identified 2 distinct mechanisms, chromosome loss and point mutation, which acted to reduce TcNTR activity. In the latter case, 3 distinct, independently acquired mutations were identified. T. cruzi is extremely diverse, with a genome characterized by extensive and highly variable surface antigen gene families [36]. This antigenic diversity may have arisen in response to selective immune pressure during evolution, which acted to limit the proofreading ability of DNA polymerase and/or DNA repair mechanisms. As a consequence, the parasite may have acquired an ability to readily develop drug resistance by mutational mechanisms such as those described here. This is an important consideration that should inform drug development strategies for Chagas disease.

Supplementary Data

Supplementary materials are available at The Journal of Infectious Diseases online (http://jid.oxfordjournals.org/). Supplementary materials consist of data provided by the author that are published to benefit the reader. The posted materials are not copyedited.
The contents of all supplementary data are the sole responsibility of the authors. Questions or messages regarding errors should be addressed to the author.
Competency of Professional Accountant in Malaysia as Attributes towards Compliance of AMLA 2001 Money laundering offences have become an important issue worldwide. Within the accounting fraternity, discussions have centred on the low level of compliance by professional accountants with the anti-money laundering regime in Malaysia. In the context of this study, the anti-money laundering regime refers to both the legislative requirements of the Anti-Money Laundering, Anti-Terrorism Financing and Proceeds of Unlawful Activities Act 2001 (AMLA 2001) and the international standards of the Financial Action Task Force (FATF). Since the introduction of AMLA 2001, professional accountants (including auditors, tax consultants and practising accountants) have been named as reporting institutions. As reporting institutions, professional accountants are required to implement compliance programs and to submit related suspicious transaction reports. The three compliance programs specified by the anti-money laundering regime are "know your customer" (KYC), "customer due diligence" (CDD) and record keeping. This study was undertaken to examine the importance of competency and training among professional accountants for money laundering compliance, using Protection Motivation Theory to explain the importance of these attributes. A questionnaire survey was developed and sent to professional accountants. The findings identify "competency & training" as significantly related to the level of compliance with the anti-money laundering regime. Introduction Money laundering is not a new phenomenon. Cases are widely known, and Malaysia has seen an increase in money laundering activities from year to year. In Malaysia, AMLA was introduced in 2001 with the objective of mitigating money laundering offences. Initially, AMLA 2001 applied to all financial institutions; however, in 2004 it was extended to designated non-financial businesses and professions (DNFBP), which include auditors, accountants, lawyers, notaries, real estate agents, trusts, dealers in precious metals and stones, and casinos. This group is responsible for complying with AMLA 2001. Given the serious implications of these offences, AMLA 2001 contains provisions enabling authorities to freeze, seize, and forfeit the illegal proceeds of crime as a stringent measure against money laundering activities. BNM stated that more than a hundred money laundering cases involving companies and individuals are still awaiting trial (BNM website, 21 May 2019). Managing money laundering issues is challenging and difficult, as money launderers craftily analyse every aspect of the related laws to exploit any loopholes. Currently, there are cases in which evidence collection has taken more than two years, with prosecution still ongoing due to the difficulty of establishing the underlying predicate offence (Zakiah & Khalijah, 2012). The advancement of digital financial technology has further allowed money launderers to become more creative, while the legislation and regulation to mitigate such offences remain lagging or too slow to be operationalized. To further complicate matters, money launderers have started using the expertise of lawyers, accountants, auditors, company secretaries, and money changers to facilitate and conceal their criminal activities (Cabana, 2007; Ruiz, 2004).
Globally, the group of professionals mentioned above is known as Designated Non-Financial Businesses or Professions (DNFBP). DNFBPs are reporting institutions listed in Paragraph 3.3 (a) to (k) of AMLA 2001. In 2004, DNFBPs became reporting institutions and were required to comply with AMLA 2001. Professional accountants should be able to detect money laundering activities during the audit task and to report them to BNM, and it is understood that they should carry out all AMLA procedures, known as CDD, KYC and record keeping. Currently, money launderers choose professionals such as accountants and lawyers because the oldest modus operandi, moving funds through the financial system (for example, banking institutions), may no longer be easy for them. Banking institutions have made considerable efforts to mitigate money laundering; most banks can readily identify suspicious activities through their systems and their well-trained employees. Professional accountants or lawyers are therefore often chosen by money launderers to act as nominated persons. They are appointed to perform certain duties or designated to perform certain acts or functions. In the money laundering context, they are aware of the risk, receive instructions, perform tasks, and are trusted (Nikoloska & Simonovski, 2012). Accountants and lawyers may not realize that they have aided money launderers to set up new companies, purchase assets, invest in securities, and so on. This study identifies whether competency is a determinant of professional accountants' compliance with AMLA 2001 rules and regulations. Literature Review and Theoretical Background Money laundering regime and development in Malaysia From the legislative perspective, the Malaysian Anti-Money Laundering Act 2001 defines money laundering as an "act that engages, directly or indirectly, in a transaction that involves proceeds of any unlawful activity, acquires, receives, possesses, disguises, transfers, converts, exchanges, carries, disposes, uses, removes from or brings into Malaysia proceeds of any unlawful activity; or conceals, disguises or impedes the establishment of the true nature, origin, location, movement, disposition, title of, rights with respect to, or ownership of, proceeds of any unlawful activity" (AMLA 2001, Part II, Section 4). Money laundering activities are usually carried out through "structuring", "smurfing" or "blending". Splitting large financial transactions into smaller amounts is intended to avoid detection of money laundering. "Structuring" can be implemented through a series of "smurfing" operations, which additionally makes it difficult to identify money laundering. Structuring and smurfing can be used in placement, the first stage of the money laundering process, which aims to place illegal funds into the financial system. An intermediate stage, layering, obscures the audit trail through successive transactions. The final stage, known as integration, involves the integration of funds into the legitimate economy. This is realized through the purchase of assets such as real estate or luxury goods, or by "blending", which means providing illegal funds to a legal business by mixing such funds with legal income (Dobrowolski, 2019). Being the gatekeepers of the AMLA regime, reporting institutions are encouraged to systematically strengthen their procedures and enhance their roles in mitigating money laundering offences. The new FATF recommendations issued in 2012 for DNFBP contained no significant changes, but the recommendations were simplified to only two.
Currently, DNFBPs only need to comply with Recommendations 22 and 23. Recommendation 22 specifically addresses customer due diligence (CDD) and should be read together with Recommendations 10, 11, 12, 15 and 17 of the FATF Recommendations. Recommendation 23 specifically concerns the regulatory and supervision requirements (Normah et al., 2016b). Reporting institutions also need to comply with know your customer (KYC), customer due diligence (CDD) and record keeping requirements. All reporting institutions, including DNFBPs, are required to take reasonable steps to gather evidence about the identity of new clients and to keep records updated should any new information arise. The regulated firms also need to understand the nature of the client's business. The maintenance of good KYC helps firms to effectively manage money laundering risk, by reducing the likelihood that they will take on a money launderer as a new customer or client and by increasing the likelihood that they will detect the use of their products and services for money laundering activities (Harvey, 2004). KYC rules start with the collection of client identification documents and their verification, followed by keeping detailed and up-to-date records of the business relationship for a specified time, reporting any suspicious transaction to the competent authority (FIED), and finally responding to any enquiries from the FIED regarding the suspicious transactions. All reporting institutions, whether of financial background or not, become "traffic wardens" required to provide information on the flow of financial traffic; in other words, they are responsible for reporting any unusual or uncommon activities of their clients to the competent authority (Amicelle, 2011). CDD involves conducting background investigations in order to ensure potential clients are not terrorist financiers, sanctioned persons, or members of organized crime groups (FATF, 2010). The vital element in conducting CDD is that accountants must identify the background of prospective clients and verify their identity by cross-referring to reliable and independent documents. In short, CDD carries at least two meanings: a due obligation and duty, and the making of persevering efforts to voluntarily obtain sufficient information (Ai, 2009). The CDD procedure is performed not only at the acceptance of the client, but also throughout the lifespan of the relationship; for accountants, the CDD procedure continues until the end of the contract for providing accounting or auditing services. Shehu (2010) stresses that the CDD procedure must be written, documented, and clear, and must adopt a risk-based approach, so that the procedures differ from one client to another based on their risk profile. The risk profile is determined by looking at the products or services offered, geographical location, complexity of business activity, and the background of the client or beneficial owner. Records of customers and their transactions should be kept for a minimum of five years. This is necessary to provide a paper trail to the authorities should they need to take legal action against the customer for any criminal activity (Sharma, 2005). In addition, the records kept must enable the reporting institutions to establish the history, circumstances, and reconstruction of each transaction.
The records shall include at least the identity of the customer, the identity of the beneficiary, the identity of the person conducting the transaction where applicable, the type of transaction (e.g. deposit or withdrawal), the form of transaction (e.g. by cash or by cheque), the instruction, origin, and destination of fund transfers, and the amount and type of currency (Dhillon, 2013). Shehu (2010, as cited in Kemal, 2014) adds that the location, method, and frequency of the transaction, as well as the source of the funds, are also needed as part of the record. During an investigation, the ability to trace historical transactions is tremendously important (Sathye & Islam, 2011). However, any document that is related to a money laundering case should be kept longer than the stipulated minimum requirement of six years, since it serves as important evidence (Haynes, 2008). Mugarura (2011) also claimed that training employees in record keeping and in reporting suspicious transactions is among the best tools to fight money laundering. Competency Competency emphasises an individual's capabilities and competence to cope with a task or make a choice (Bandura, 1977, 1991). Competency has been shown to have a significant impact on an individual's ability to accomplish tasks (Compeau & Higgins, 1995). Competency is also known as expertise, which can be considered in two ways. Firstly, expertise is knowledge that claims to be highly relevant for practical purposes. This follows from the nature of the knowledge that qualifies as "expertise": it is stored in professions and organisations rather than in academic disciplines, and it can only be earned through working experience; the longer you do the task, the more expert you become. Secondly, expertise is a kind of knowledge that claims to be correct. This claim is difficult for users to challenge. Expertise is knowledge produced and administered by specialists and can only be challenged by specialists. Expert competence is considered so advanced that it cannot be evaluated or controlled by people who do not have the same education and experience (Jacobsson, 2000). The inexpert person needs to trust the expert (Hulsse & Kerwer, 2007). Research by Ifinedo (2011) on information system security policy suggested that competency has a positive effect on information system security compliance. This is consistent with the study by Verkijiya (2018), which also found a positive relationship between self-efficacy and smartphone security behaviour. Another study found a significant gap in attitudes between senior management and junior staff despite the training provided: while senior management's attitudes were positive, junior staff tended to perceive anti-money laundering activities as an extra burden on banking operations, owing to the incomprehensive training given to them (Simwayi & Guohua, 2011). In banking institutions, competencies are recognized as crucial factors in detecting money laundering activities, from the initial approach by new customers, through assessing the level of risk, to writing suspicious transaction reports. Frontline officers are named as the first line of "defence" in screening customers for money laundering risk potential.
The damage can become unbearable if a money laundering risk passes through the frontline officers because of their failure to screen the new customer's background. However, any money laundering risk detected after reviewing the new customer's transactions can be reported to BNM as a suspicious transaction, and the bank should not notify the customer of the report made to BNM (Isa, Sanusi et al., 2015). To continue promoting and maintaining the expertise of human capital, personnel should be provided with relevant continuous education and training, covering, for example, cutting-edge technology, new financial instruments, regulation and policies, and the ability to trace and track illegal behaviour in the digital economy (Vaithilingam & Nair, 2007). Professional competency can be enhanced by providing sufficient training to personnel. In addition, providing training and ensuring that only properly trained personnel handle transactions have become legal requirements in many countries (Sathye & Islam, 2011). The importance of continuous training increases with the fast-paced advancement in money laundering techniques and the ever-evolving money laundering regulations. Continuous training programmes keep personnel up to date with sophisticated technology and new typologies of money laundering activities, thus motivating them to detect money laundering transactions. Therefore, professional accountants should examine their anti-money laundering training strategies, goals, and objectives on an ongoing basis and maintain effective anti-money laundering programmes that reflect best practices. The training given should clarify the AML legislation, rules, and procedures; the requirements of legislative bodies; the duties of the competent authority as well as the reporting institution; documentation; and, most importantly, how to deal with typical customers and transactions in money laundering activities. All employees should be trained to be alert and watchful with regard to documents and answers given by clients (Verhage, 2009). This is supported by research on tax training by Nagela (2019), which found that training may have increased entrepreneurs' knowledge about their financial obligations and the regulations concerning deductions of business costs, raised their awareness of penalties for incorrect deductions, and increased compliance. According to Zimeles (2004), training should include more real-life examples using case studies or role play. It has also been found that effective training should focus on the risks faced by employees and the needs of the reporting institutions, especially when given to employees who work closely with clients; when facing suspicious activities, these employees are in the best position to detect them. Dusabe (2016) found that knowledge and skills are important for individuals working in the AML sector, such as FIU members, regulators, public prosecutors, the judiciary and law enforcement bodies, to make them capable of complying with the requirements. Theoretical Framework This study adopts Protection Motivation Theory (PMT), which has two components: threat appraisal and coping appraisal. Threat appraisal includes three factors that explain how threats are perceived.
These are rewards or benefits (any intrinsic or extrinsic motivation to increase or maintain an unwanted behaviour), severity (the magnitude of the threat), and vulnerability (the extent to which the individual is perceived to be susceptible to the threat). Coping appraisal includes three factors that explain an individual's ability to cope with the threat: response efficacy (the belief in the perceived benefits of the coping action in removing the threat), response cost (the cost incurred by the individual in implementing the protective behaviour), and self-efficacy (the degree to which an individual believes it is possible to implement the protective behaviour). This study, however, focuses on self-efficacy, renamed here as "competency" to facilitate readers' understanding. Research Methodology This research adopts a quantitative approach, employing survey research as the primary method. A questionnaire survey was the main instrument used to collect data from professional accountants, who are also members of the DNFBP group. The final questionnaires were sent to the selected sample either by email or by post. Simple random sampling was used in this study. Constructs were operationalised using Likert scales, a common approach for measuring a wide variety of latent constructs (Kent, 2001). In this research, a seven-point Likert scale ranging from (7) strongly agree to (1) strongly disagree was used for the competency and training items (independent variable). For the dependent variable, anti-money laundering compliance, a scale from (7) strongly implemented to (1) strongly unimplemented was applied to the implementation of the compliance programs stipulated by AMLA 2001 and the FATF recommendations. Instead of using one single measurement for anti-money laundering compliance, the study examines all three requirements of AMLA 2001, namely CDD, KYC and record keeping. There are eight (8) items in the training & competency construct, namely (i) importance of AML training in detecting money laundering activities; (ii) extensive training to be provided by both BNM and MIA to improve money laundering detection capabilities among professional accountants; (iii) training in money laundering assisting in recognizing suspicious transactions; (iv) training materials to be used as a reference; (v) sharing knowledge gained from training sessions with other members; (vi) adequate expertise gained from training to implement the money laundering requirements stipulated by the AML regime; (vii) training enhancing the ability to detect money laundering activities; and (viii) self-assessed capability to detect money laundering activities. This study used SPSS to analyse all the data captured from the survey questionnaire. The statistical procedures undertaken include tests of data normality and reliability, mean scores, correlations between variables, and linear regression analysis. Regression analysis is a set of statistical methods for estimating the relationship between a dependent variable and one or more independent variables. The regression equation is as follows: MODEL: ALL = a + a1*COMP + e, where the dependent variable ALL is the magnitude of compliance with CDD, KYC and record keeping, COMP is the competency & training score, a is the intercept, a1 is the coefficient, and e is the error term (a minimal sketch of this model fit is shown below). Results and Discussion This study focused on identifying whether competency influences compliance with AMLA 2001 by professional accountants.
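The following minimal R sketch illustrates a fit of the single-predictor model above. All data, the sample size, and the variable names ALL and COMP are hypothetical stand-ins for the actual survey scores, which the study analysed in SPSS rather than in R.

set.seed(1)
n <- 120                                                      # hypothetical number of respondents
COMP <- sample(1:7, n, replace = TRUE)                        # competency & training score (7-point Likert)
ALL <- pmin(pmax(3 + 0.4 * COMP + rnorm(n, 0, 0.8), 1), 7)    # simulated compliance magnitude (CDD, KYC, record keeping)
fit <- lm(ALL ~ COMP)                                         # fits ALL = a + a1*COMP + e
summary(fit)                                                  # coefficient a1 with its t statistic and p value
anova(fit)                                                    # significance of the competency term

A positive and significant estimate of a1 in such a fit corresponds to the finding reported below that competency positively influences compliance.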
Here, this refers to a situation where professional accountants seek new knowledge through training in the area of the anti-money laundering regime (AMLA 2001 and FATF) in order to enhance their competency in detecting and mitigating money laundering offences. Based on the descriptive analysis, interestingly, about ninety-five percent (95%) of the respondents agreed that five training-related items (knowledge sharing, training material, attendance of training, extensive training and the importance of training) contribute positively to professional accountants. However, such training may not adequately enhance their competency and ability to implement the anti-money laundering regime effectively or to detect money laundering activities. Competency requires considerable exposure and experience, and professional accountants need more time before they can become competent in detecting money laundering activities. A lack of competency among professional accountants would become one of the reasons for non-compliance with AMLA 2001 and would ultimately undermine BNM's initiative to reduce money laundering cases. A reliability test was performed for the competency & training component, with a KMO value of 0.769. A Cronbach's alpha value of 0.6 is the rule of thumb for describing internal consistency, and it has been suggested that values of 0.6 and greater indicate acceptable internal consistency and composite reliability (Zinbarg et al., 2006); hence the reliability of the competency & training scale was accepted. The skewness and kurtosis of competency & training, 0.292 and 0.143 respectively, fell between -1.96 and +1.96, which means normality can be assumed at the 0.05 significance level (a sketch of these screening checks in R is given below). In addition, there is no evidence of multicollinearity, as the correlations between the variables fell below 0.9. Refer to Table 1 for the regression results. Competency positively influences compliance with the money laundering requirements among professional accountants. This finding is consistent with the studies by Ifinedo (2011), Verkijiya (2018) and Lwin et al. (2012). The results are supported by the existing literature: the importance of continuous training increases with the fast-paced advancement in money laundering techniques and the ever-evolving money laundering regulations. Continuous training programmes keep professional accountants up to date with new AMLA 2001 and FATF requirements, with the latest and most sophisticated technology (hardware and software), and with new typologies of money laundering activities. In addition, Dusabe (2016) found that knowledge and skills are crucial for individuals working in the AML sector so that they are capable of complying with the requirements. Lacking knowledge of basic AML requirements, as well as limited resources, has negatively affected the smooth implementation of AML programmes (Subbotina, 2009). Training and competency are thus an important weapon in the implementation of anti-money laundering programmes. Training provided by regulators as well as reporting institutions is known to be a gateway for AMLA implementation; without proper knowledge and training, programmes relating to AMLA cannot be attained. MIA may promote training on the AMLA regime as part of the important training that all professional accountants should obtain every year, to ensure that they keep up to date with any changes in the law. CPE hours can be introduced to make the training compulsory.
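The scale-screening statistics reported above (Cronbach's alpha, skewness and kurtosis) can be computed directly in base R, as in the minimal sketch below. The item responses here are hypothetical, and the study's actual figures came from SPSS.

set.seed(2)
n <- 120
latent <- rnorm(n, 4, 1)                                      # hypothetical underlying competency trait
items <- sapply(1:8, function(i)                              # eight 7-point Likert items driven by the trait
  pmin(pmax(round(latent + rnorm(n, 0, 0.7)), 1), 7))

cronbach_alpha <- function(x) {                               # classical alpha: k/(k-1) * (1 - sum of item variances / total variance)
  k <- ncol(x)
  k / (k - 1) * (1 - sum(apply(x, 2, var)) / var(rowSums(x)))
}
skewness <- function(v) mean((v - mean(v))^3) / sd(v)^3       # third standardized moment
kurtosis <- function(v) mean((v - mean(v))^4) / sd(v)^4 - 3   # excess kurtosis

score <- rowMeans(items)                                      # composite competency & training score
cronbach_alpha(items)                                         # accept the scale if >= 0.6, per the rule of thumb cited above
c(skewness(score), kurtosis(score))                           # assume normality if both fall within +/- 1.96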
Competency is important in the process of detecting suspicious transactions and activities; however, the professional accountant also needs to be aware of the nature of the client's business and industry, as this helps in determining which transactions are suspicious. BNM should encourage all reporting institutions to continually extend their knowledge and expertise by attending training and seminars, which can be initiated by BNM, MIA, MICPA or other institutions. In conclusion, this study highlights compliance with the AMLA 2001 regime among professional accountants. It found that training and competency are a determinant that encourages compliance among professional accountants: extensive training promotes competency, and this leads to greater compliance with the AMLA 2001 regime. As a recommendation, the provision of training and workshops by accounting bodies in Malaysia such as MIA, MICPA, ACCA and ICAEW would be highly valuable. The training should strengthen professional accountants' understanding of the AMLA 2001 regulation and cover a wide range of areas, such as real-life examples of money laundering transactions, the audit trail, the movement of laundered money from one region to another, the use of software or artificial intelligence as detection mechanisms for money laundering transactions, and more. It is hoped that the compliance industry (professional accountants) will then achieve full compliance with AMLA 2001 in the future.
2022-01-24T16:02:50.028Z
2022-01-13T00:00:00.000
{ "year": 2022, "sha1": "bc8cebfeb1f611b399d5eed541fc3cd53f28ec09", "oa_license": "CCBY", "oa_url": "https://hrmars.com/papers_submitted/11180/competency-of-professional-accountant-in-malaysia-as-attributes-towards-compliance-of-amla-2001.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b5f75906a52ea24f5e7bf2ef52a37fe05cc3bcfc", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
3742775
pes2o/s2orc
v3-fos-license
Proteomic characterization of high-density lipoprotein particles in patients with non-alcoholic fatty liver disease Background Metabolic diseases such as obesity and diabetes are associated with changes in high-density lipoprotein (HDL) particles, including changes in particle size and protein composition, often resulting in abnormal function. Recent studies suggested that patients with non-alcoholic fatty liver disease (NAFLD), including individuals with non-alcoholic steatohepatitis (NASH), have smaller HDL particles when compared to individuals without liver pathologies. However, no studies have investigated potential changes in HDL particle protein composition in patients with NAFLD, in addition to changes related to obesity, to explore putative functional changes of HDL which may increase the risk of cardiovascular complications. Methods From a cohort of morbidly obese females who were diagnosed with simple steatosis (SS), NASH, or normal liver histology, we selected five matched individuals from each condition for a preliminary pilot HDL proteome analysis. HDL particles were enriched using size-exclusion chromatography, and the proteome of the resulting fraction was analyzed by liquid chromatography tandem mass spectrometry. Differences in the proteomes between the three conditions (normal, SS, NASH) were assessed using label-free quantitative analysis. Gene ontology term analysis was performed to assess the potential impact of proteomic changes on specific functions of HDL particles. Results Of the 95 proteins identified, 12 proteins showed nominally significant differences between the three conditions. Gene ontology term analysis revealed that severity of the liver pathology may significantly impact the anti-thrombotic functions of HDL particles, as suggested by changes in the abundance of HDL-associated proteins such as antithrombin III and plasminogen. Conclusions The pilot data from this study suggest that changes in the HDL proteome may impact the functionality of HDL particles in NAFLD and NASH patients. These proteome changes may alter cardio-protective properties of HDL, potentially contributing to the increased cardiovascular disease risk in affected individuals. Further validation of these protein changes by orthogonal approaches is key to confirming the role of alterations in the HDL proteome in NAFLD and NASH. This will help elucidate the mechanistic effects of the altered HDL proteome on cardioprotective properties of HDL particles. Electronic supplementary material The online version of this article (10.1186/s12014-018-9186-0) contains supplementary material, which is available to authorized users. Background NAFLD is associated with an increased risk of cardiovascular disease (CVD), potentially mediated by obesity, elevated plasma triglyceride and low density lipoprotein (LDL) cholesterol levels, and altered high-density lipoprotein (HDL) cholesterol levels, reflecting an overall atherogenic lipid profile [2]. HDL particles serve multiple essential functions. Apart from reverse cholesterol transport (RCT) promoting lipid efflux from cells, they also contain proteins which function as acute-phase response proteins and impart tissue-protective anti-inflammatory, anti-oxidative and anti-thrombotic properties which are also anti-atherogenic [3]. In obese and diabetic individuals, plasma levels of HDL-C are often reduced, with a preponderance of small, dense HDL particles. This is further exacerbated in patients with NAFLD and NASH [4].
It has been proposed that these particles are dysfunctional, increasing the risk of atherosclerosis [5]. Proteomics has been used previously to examine HDL particle composition in patients with cardiovascular disease pathologies. Analysis of HDL particles from patients with coronary artery disease (CAD) revealed an enrichment of APOE, APOC-IV, PON-1, complement C3 and APOA-IV in HDL particles when compared to healthy controls [6]. Recent studies described increases in the abundances of serum amyloid A, C3 and inflammatory proteins in CAD, suggesting a shift of HDL particles from an anti-inflammatory to a pro-inflammatory state [7,8]. These studies suggest that specific protein alterations in HDL particles may lead to altered HDL function. To date, no studies have investigated changes in the HDL proteome in NASH or NAFLD, even though HDL particle size has been reported to change in NAFLD patients [4]. Several studies have examined proteomic differences in serum or liver samples between controls and patients with different degrees of NAFLD to identify possible biomarkers for progression of NAFLD [9]. However, specific proteomic changes in HDL particles have not been examined. In this study, we explored whether differences in HDL-associated proteins could potentially point to alterations in HDL functions in patients with NAFLD and NASH. From our clinical cohort, we selected five individuals with normal liver histology, five individuals with steatosis, and five individuals with NASH. Individuals were matched by age and BMI. The HDL proteome was characterized using high-resolution mass spectrometry (MS) after enrichment of HDL particles from serum. To minimize the amount of serum required for analysis, we used size exclusion chromatography (SEC) to enrich lipoprotein particles [10]. Comparing the HDL proteome between normal, SS and NASH subjects, we detected nominally significant quantitative differences in HDL proteins. Gene ontology analysis revealed that proteins potentially affecting anti-thrombotic functions were decreased with increased disease severity. This change in putative HDL-associated proteins may contribute to increased tissue injury and cardiovascular disease risk in patients diagnosed with NAFLD. Recruitment and sample collection The study protocol was approved by the Medical College of Wisconsin's Institutional Review Board. Subjects gave written informed consent for participation in the study. Subjects were females of Northern European descent, morbidly obese (BMI ≥ 40 kg/m² or > 35 kg/m² with significant co-morbidities) with documented unsuccessful dietary attempts to lose weight, and who underwent bariatric surgery. A liver biopsy was collected intra-operatively from all patients for histological phenotyping. Patients with alcohol intake > 20 g/day and those with other liver diseases (hepatitis B, hepatitis C, auto-immune hepatitis, primary biliary cirrhosis, Wilson's disease, alpha-1 antitrypsin deficiency, or hemochromatosis) based on positive serological tests and suggestive liver histology were excluded. Patients using drugs associated with NAFLD (systemic glucocorticosteroids, Tamoxifen, Tetracycline, Amiodarone, Methotrexate, Valproic Acid, anabolic steroids, estrogens at doses higher than those used for hormone replacement, or other known hepatotoxins) were also excluded.
Fasting blood samples for serum extraction, and clinical and biochemical data, were collected from all subjects in the morning of the scheduled surgery. Histological evaluation and diagnosis All liver biopsy samples were read by an expert pathologist (R.K.) to define the NAFLD phenotype and to semiquantitatively score the individual histological features and subphenotypes, including steatosis, lobular and portal inflammation, hepatocellular ballooning, Mallory's hyaline and fibrosis, according to the scoring system of the NIH NASH Clinical Research Network working group [11]. Subjects with < 5% macrosteatosis were diagnosed as non-NAFLD controls. NAFLD was diagnosed when ≥ 5% macrosteatosis was present. Using a strict pathologic protocol based on Dixon [12] to define NASH, each liver biopsy specimen was classified as: (1) simple steatosis alone, (2) possible NASH (> 5% steatosis plus one of the following zone 3 centrilobular findings: lobular inflammation, hepatocyte ballooning with or without Mallory's hyaline, pericellular/perisinusoidal fibrosis), (3) definite NASH (> 5% steatosis plus two of the following zone 3 centrilobular findings: lobular inflammation, hepatocyte ballooning with or without Mallory's hyaline, pericellular/perisinusoidal fibrosis), or (4) normal. Patients classified into groups 2 and 3 were combined for the purposes of this analysis. NAFLD was defined as the combination of classifications (1)-(3), covering the complete spectrum from SS to NASH. Separation of serum lipoprotein particles and nano-HPLC-MS/MS HDL particles were enriched by SEC, as previously described [10], from 100 μl aliquots of serum for each injection. Fractions positive for Apo-AI and in the expected size range for HDL particles were combined and delipidated by extraction with methanol-chloroform to isolate HDL particle-associated proteins. Proteins were quantified and prepared for MS analysis as described before [13]. Protein digests were analyzed on a ThermoFinnigan LTQ Orbitrap Velos ion trap mass spectrometer interfaced with a nano-LC system (Waters) equipped with an autosampler, through which samples were loaded onto a C18 capillary column (15 × 0.75 mm). The capillary column was packed in-house with 5 μm C18 RP particles (New Objective, Woburn, MA, USA). Solvents A and B used for the chromatographic separation were 5% acetonitrile in 0.1% formic acid and 95% acetonitrile in 0.1% formic acid, respectively. Samples were resolved at a flow rate of 0.3 μl/min using a gradient of 2% B for 0-10 min, 2-40% B from 10 to 50 min, 40-98% B from 50 to 60 min, 2% B from 60 to 65 min and 2% B from 65 to 120 min. Each HDL-containing fraction from an individual serum sample was injected three times as technical replicates to maximize protein identifications. Analysis of the serum proteome by nano-HPLC-MS/MS Serum proteins from individual samples were precipitated, dissolved in Tris buffer, and quantified. 200 μg of protein was reduced with TCEP, alkylated using iodoacetic acid and digested using a LysC-trypsin enzyme mixture (Promega). Peptides were separated on a 50 cm C18 column attached to a Dionex Ultimate 3000 nano-UPLC system coupled to a Q-Exactive HF hybrid Quadrupole-Orbitrap mass spectrometer (Thermo Scientific, Rockford, IL, USA). Good chromatographic separation was observed with a linear gradient consisting of mobile phases A (water with 0.1% formic acid) and B (acetonitrile with 0.1% formic acid), where the gradient was from 5% B at 0 min to 40% B at 80 min.
MS spectra were acquired by data-dependent scans consisting of MS/MS scans of the twenty most intense ions from the full MS scan, with a dynamic exclusion window of 10 s. Data analysis All 45 data files (3 technical MS replicates per sample) were searched against the human UniProt canonical and isoform database (release 2016_03) using MaxQuant ver. 1.5.3 [14]. Proteins were identified at a 1% protein false discovery rate (FDR), determined empirically by reversed decoy database searching according to standard MS analysis approaches. Cysteine carbamidomethylation (+ 57.021) was considered as a fixed modification, and oxidation of methionine (+ 15.995) and N-terminal acetylation (+ 42.010) were considered as variable modifications. The main search peptide tolerance was kept at 4.5 ppm and the minimum intensity threshold was kept at 500. The MS/MS match tolerance was kept at 0.5 Da. The remaining settings in MaxQuant were kept at default. The 'match between runs' feature was activated. Label-free normalization and quantitation were performed using the LFQ feature of MaxQuant (MaxLFQ) [15]. The minimum number of neighbors was kept at three and the average number of neighbors at six for LFQ. Data cleaning and statistical analysis were performed using Perseus ver. 1.5.3 [16]. Proteins identified as decoys or contaminants were manually removed. The LFQ values were log-transformed and filtered, with a minimum of 66% of values present for each sample. Missing values were replaced using values computed from the normal distribution with a width of 0.3 and a downshift of 1.8. The minimum number of peptides per identified protein after data clean-up was two peptides per protein. Analysis of the serum proteome data was carried out in Proteome Discoverer. Spectra were searched using the Sequest HT algorithm within Proteome Discoverer v2.1 (Thermo Scientific) in combination with the human UniProt protein FASTA database (20,193 entries, December 2015). Search parameters were as follows: FT-trap instrument, parent mass error tolerance of 10 ppm, fragment mass error tolerance of 0.02 Da (monoisotopic), variable modification of 16 Da (oxidation) on methionine and fixed modification of 58 Da (carboxymethylation) on cysteine. Peptide spectral match (PSM) numbers for each identified peptide were scaled using the total PSM in a particular sample and normalized via z-score normalization. Welch's t test was used on protein entries with a minimum of three valid values per group to identify proteins that differed significantly (p < 0.05) between the normal and NASH samples. We used statistical regression analysis in R ver. 3.0.3 to examine the association of traits with protein abundance; the function lm() provides the coefficient, and the function anova() gives the p value of the coefficient. For gene ontology analysis related to biological process (BP) or molecular function (MF), ConsensusPathDB-human [17] was used. Results Our study focused on a group of 15 morbidly obese females for an exploratory HDL proteomics analysis: 5 with no abnormal liver histology, 5 with SS and 5 with NASH. The three groups ranged in age from 45 to 59 years. Individuals were selected by matching age and BMI across all three pathological classes (Table 1). Of the 5 patients diagnosed with NASH, 2 had diabetes. All five patients in the NASH group were also diagnosed with hypertension, compared with one patient with SS and three patients in the normal liver group.
None of the patients used sulfonylureas, statins or insulin. One patient in the NASH group indicated the use of metformin, while another subject in the NASH group reported the use of thiazolidinediones (TZDs). No statistically significant differences in age and BMI were noted between the group with normal livers and patients with SS or NASH. Alanine transaminase (ALT), glucose, and triglyceride levels, and the homeostatic model assessment (HOMA), showed no significant differences between the three groups. While HDL and LDL levels did not vary significantly between the three groups, the median HDL sizes showed a statistically significant difference (p < 0.05) between the three groups (Table 1). HDL proteome analysis HDL-enriched fractions isolated by FPLC-SEC were combined and delipidated, and the proteins were digested with trypsin and analyzed by MS. We have previously shown that this protocol enables an in-depth analysis of the HDL-associated proteome from very small serum volumes [10]. MS analysis identified a total of 125 proteins in the 15 samples. On average, we identified 121 proteins in samples from subjects with normal livers, 120 proteins in subjects with SS, and 122 proteins in subjects diagnosed with NASH. No proteins were uniquely identified in individuals with any one of the three liver diagnoses, suggesting that any proteomic differences in HDL particles in NAFLD patients are likely to be quantitative. Therefore, we examined the quantitative differences between proteins that were identified in samples from normal, SS and NASH patients. 95 proteins (FDR < 0.01) were identified across all samples with a minimum of two peptides and a minimum of one unique peptide, and these proteins were used for further analysis (Additional file 1: Supplementary Table). Of these, 60 proteins are annotated as HDL-associated proteins when compared against the HDL proteome watch list [18]. 12 proteins showed differences in relative abundance between normal, SS and NASH patients (p ≤ 0.05). Seven of the 12 proteins were related to endopeptidase function. Alpha-2-macroglobulin (A2MG) and apolipoprotein B (APOB) were increased in SS and NASH and followed the trend NASH > SS > normal. The remaining 10 proteins were decreased in the SS and NASH conditions (Table 2). When we compared HDL protein abundances between SS and NASH, 7 proteins were nominally significantly changed (Table 2). Six of the seven proteins belong to the immunoglobulin class and were significantly increased in SS. While all detected protein changes were nominally statistically significant, the observed changes were small. APOB and A2MG abundances in control individuals (morbidly obese females with no liver pathology) were reduced by 5 and 4%, respectively, when compared to HDL particles from morbidly obese NASH subjects. As shown in Table 2, overall, the comparison of protein abundances of HDL particles between normal and NAFLD (SS plus NASH) identified 20 proteins that were significantly altered (p ≤ 0.05). HDL proteins of NASH patients may impact anti-thrombotic functions of HDL To understand the putative functional implications of altered protein abundances in NAFLD, we performed a GO term over-representation analysis.
Analysis of the biological process (BP) GO category revealed the terms 'negative regulation of endopeptidase activity' (GO:0010951), 'platelet degranulation' (GO:0002576), 'blood coagulation' (GO:0007596), 'complement activation' (GO:0006956), 'fibrinolysis' (GO:0042730) and 'positive regulation of blood coagulation' (GO:0030194) to be significantly over-represented (p < 0.01, Fig. 1). Under the molecular function (MF) GO category, the only over-represented group was 'endopeptidase inhibitor activity' (p < 0.01). We further examined the abundances of proteins related to the thrombotic function of HDL to analyze the relationship between their relative levels in normal versus NAFLD subjects and HDL function. Three proteins implicated in the regulation of blood coagulation (antithrombin III or ANT3, plasminogen or PLMN, and histidine-rich glycoprotein or HRG) were significantly decreased in SS and NASH patients compared to individuals with normal liver pathology (p ≤ 0.02, Table 2). Comparison of the HDL proteome to serum protein abundance We then sought to ascertain that the differences in protein abundances likely originate from proteins that are HDL-associated and do not reflect systemic abundance differences in the serum proteome between the groups that happen to co-elute during chromatography. The serum proteome of the subjects who were not diagnosed with liver disease was compared with the proteome of subjects diagnosed with NASH. We identified 173 proteins across the two sets of samples. After filtering for proteins which were detected in at least three samples per group, we did not detect any proteins that were significantly different between the two groups. We also specifically examined the proteins with anti-thrombotic functions identified in the HDL proteome analysis (ANT3, PLMN and HRG); these three proteins were not significantly different between the two groups (Table 3). Discussion The aim of our pilot study was to identify putative differences in the HDL proteome between morbidly obese individuals with different liver pathologies, and to determine whether these differences suggest a potential change in HDL function. Such a change in the function of HDL particles, suggested through proteomic analysis, may help in the assessment and treatment of cardiovascular disease risk in NAFLD patients. The development of obesity and dyslipidemia is associated with increased CVD risk and is inversely correlated with HDL particle sizes [4,5]. While studies have shown specific protein and functional changes in HDL particles in diabetes, obesity, and CVD, no study to date has investigated whether characteristic changes in the HDL protein composition of patients with NAFLD and NASH may suggest functional changes relevant to the increased CVD risk. In our exploratory proteomic analysis of HDL particles isolated from NAFLD patients, we identified 95 putative HDL-associated proteins. Similar proteomic analyses of HDL particles have identified on average 85-100 proteins [18]. The HDL protein cargo is a dynamic feature of HDL particles and reflects their function and physiological status. Thus, the number of protein identifications is similar to what has been reported by other groups and includes a large proportion of proteins routinely identified in HDL particles. Additional proteins identified in our analysis that are not routinely reported as HDL-associated include predominantly immunoglobulin fragments, complement-related proteins, and coagulation factors.
In our HDL analysis we also identified attractin (ATRN), which is involved in the inflammatory response; vitamin K-dependent protein S (PROS), an anticoagulant; and corticosteroid-binding globulin (CBG), a serine protease inhibitor. Since the subjects in this study are all morbidly obese, these proteins may also reflect obesity-related changes in HDL particles. For example, CBG was identified as an inflammatory marker and stress response protein in obese patients with hypertension [19]. We recognize the concern that SEC-based HDL isolation, and consequently the identified HDL proteome, may not completely agree with gold-standard methods such as density gradient ultracentrifugation, electrophoretic separation or antibody-mediated methods. Our findings indicate that our enrichment procedure likely still retains some co-eluting serum protein complexes that may not be directly associated with HDL particles. However, the number of HDL-associated proteins, and their relative abundance, suggests that the analyzed serum fraction is significantly enriched for HDL particles. Lipoprotein separation by SEC has been used in other studies for the analysis of both lipids [20] and proteins [21] and has been shown to be comparable to separations obtained by ultracentrifugation [22]. The proteins that we observe to be significantly different between normal and NAFLD subjects are known constituents of HDL particles based on previous studies. HDL isolation by SEC may identify unique proteins compared to density gradient ultracentrifugation, and certain proteins may not overlap with traditional HDL isolation methods. It will be an interesting avenue for future studies to compare the HDL proteome affected in NASH/NAFLD subjects when the HDL particles are isolated using techniques other than SEC. Our comparison of protein abundances between the three conditions identified 12 nominally significantly different proteins. Three serpin family proteins (ANT3, CBG and AACT) were significantly decreased in subjects with SS and NASH compared with subjects with normal livers. ANT3 was reported to be differentially expressed in NASH in a recent gene expression analysis of liver samples [23], matching our proteomic findings. Alpha-2-macroglobulin (A2MG) and APOB were two of the 12 proteins that increased in NASH and SS subjects. A2MG has been used as a marker for fibrogenesis in NAFLD patients (Fibrosure) [24]. Our data confirm increased levels of A2MG in NAFLD patients. Interestingly, APOB has been predominantly studied in the context of VLDL or LDL particles. Our group and others have shown that APOB can be identified as an HDL-associated protein [6,8,10]. Our study shows a positive correlation between APOB abundance in HDL particles and NAFLD severity. mRNA expression levels of APOB are significantly increased in NAFLD patients [25]. PON1 levels and activity have been reported to be reduced in NAFLD patients, and PON1 has been suggested as a biomarker for the diagnosis of NASH [26]. Interestingly, in our study, PON1 showed no significant difference in abundance in NASH and SS subjects when compared to samples obtained from obese subjects with normal liver pathology. Our comparison of HDL-associated protein abundances between the three conditions shows only small fold changes. This study compared subjects with high BMI (mean = 49.5 kg/m²), with the presence or absence of steatosis as the distinguishing variable.
We believe that this comparison resulted in more subtle differences in protein abundances between the HDL proteomes, in addition to the already significant alterations due to the morbid obesity of the patients. Investigation of qualitative differences between the conditions did not identify any proteins unique to any one condition, suggesting that protein differences in HDL particles between morbidly obese individuals change only quantitatively with the development of additional liver pathologies. It remains to be seen whether any of these proteomic changes can be validated in larger studies to serve as potential biomarkers for NAFLD or NASH. Apart from RCT, HDL particles also function as anti-inflammatory, anti-oxidant and anti-thrombotic agents, since they carry a variety of proteins mediating these functions. Gene ontology analysis in the biological process (BP) and molecular function (MF) categories of HDL proteins significantly different between patients with normal liver pathology and NAFLD patients (SS plus NASH) indicated an over-representation of GO terms related to coagulation and endopeptidase inhibitor activity. Combining abundance data with GO term over-representation analysis indicates that the anti-thrombotic function of HDL proteins may be altered in NAFLD. ANT3 is the only protein which showed significantly decreased abundance between normal, SS and NASH patients as well as between normal and NAFLD patients. ANT3 is an important anti-coagulant that is activated by heparin [27]. Heparin cofactor 2 (HEP2) is also decreased in NAFLD, indicating that HDL particles from affected patients may mediate prothrombotic functions rather than serving as anti-thrombotic particles, increasing CVD risk. In this study, we decided not to correct for type 1 errors. First, the study was designed as a pilot analysis with a small sample size. Second, since the true relationship of the identified proteins is not known, a traditional stringent multiple-testing correction would remove true, biologically relevant proteins from the data as false negatives. We recognize that without this correction certain identified proteins may be construed as false positives in our analysis. However, gene ontology enrichment analysis implicates a specific set of proteins involved in thrombotic function as over-represented, rather than a random set of proteins, indicating that the observed changes reflect a potential biological phenomenon affected in NAFLD patients. Only more stringent analyses on larger datasets will help resolve whether any of the individual proteins detected in our pilot study would be suitable as potential biomarkers for CVD risk in NASH patients; this clearly exceeded the scope and aims of this initial pilot project (a sketch of the underlying quantification and testing steps is given below). None of the proteins changing with NAFLD and NASH were correlated with the size of HDL particles in our patients (data not shown). Smaller HDL particles are prevalent in subjects diagnosed with morbid obesity and with NAFLD [4]. Our analysis revealed no correlation of protein abundance with HDL particle size, suggesting that the particle size reduction in NAFLD is likely mediated by changes in lipid composition. Also, the additional reduction in HDL particle size in patients with NAFLD and NASH is relatively minor, and morbid obesity alone is correlated with a much larger reduction in HDL particle size. Therefore, the effect of altered protein abundances on particle size may not be detectable in our study.
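To make the quantification and testing workflow concrete, the following is a minimal sketch in R (the language used for the regression analysis in the Methods) of the steps described there: log transformation, valid-value filtering, imputation from a left-shifted normal distribution, and per-protein Welch's t tests. The data matrix, group column layout, and simulated values are hypothetical stand-ins; the actual analysis was performed in Perseus.

set.seed(3)
lfq <- matrix(2^rnorm(95 * 15, mean = 25, sd = 2), nrow = 95)   # hypothetical LFQ intensities (95 proteins x 15 samples)
lfq[sample(length(lfq), 300)] <- NA                             # simulate missing values
loglfq <- log2(lfq)                                             # log transformation
loglfq <- loglfq[rowMeans(!is.na(loglfq)) >= 0.66, ]            # keep proteins with >= 66% valid values
m <- mean(loglfq, na.rm = TRUE)                                 # mean of observed log intensities
s <- sd(loglfq, na.rm = TRUE)                                   # standard deviation of observed log intensities
na_idx <- which(is.na(loglfq))                                  # positions of remaining missing values
loglfq[na_idx] <- rnorm(length(na_idx), m - 1.8 * s, 0.3 * s)   # left-shifted normal imputation (downshift 1.8, width 0.3)
normal_cols <- 1:5; nash_cols <- 11:15                          # hypothetical column layout of the normal and NASH groups
pvals <- apply(loglfq, 1, function(x)                           # Welch's t test per protein (t.test defaults to unequal variances)
  t.test(x[normal_cols], x[nash_cols])$p.value)
head(sort(pvals))                                               # most significant proteins, uncorrected as in this pilot analysis

Note that Perseus applies the imputation width and downshift as fractions of the standard deviation, typically per column; the sketch above applies them to the whole matrix for brevity.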
Our analysis of the serum proteome from the same samples strongly suggests that the observed differences are not merely a reflection of differences existing in the serum. In fact, our analysis shows that none of the proteins identified in the serum are significantly different in abundance between samples from individuals with normal liver function and samples from individuals with NAFLD or NASH. Specifically, proteins with functional roles related to blood coagulation or anti-thrombotic functions were not significantly different in the serum proteome between normal and NASH subjects (Table 3). This indicates that the observed differences are likely related to the HDL particles enriched in the SEC fraction analyzed, and not to co-eluting serum proteins. To comprehensively understand and delineate the mechanisms associated with the altered HDL proteome and cardioprotective function, further validation of the proteome changes through orthogonal techniques will be essential to establish the magnitude and statistical relevance of the observed changes. Considering the modest abundance changes we observed in this study, larger cohorts will have to be investigated to confirm or refute the suggestive changes in anti-thrombotic proteins and their impact on CVD risk in NASH. However, the pilot data presented here provide, for the first time, a putative functional link between HDL particle composition and CVD risk that warrants further study. Conclusions The goal of our proteomic analysis was to characterize differences between proteins associated with HDL particles in morbidly obese patients who were diagnosed with SS or NASH or had normal liver physiology. We identified several HDL-associated proteins that are significantly changed in NAFLD (SS plus NASH). The abundances of these proteins change with disease severity. Quantitative analysis of the altered proteins revealed potential changes in HDL function which may reduce the anti-thrombotic properties of HDL particles, thereby increasing CVD risk. Currently, there is no reliable analytical method to measure the anti-thrombotic properties of isolated HDL particles, so the proposed functional impact cannot be validated at this time [3]. Further studies will be needed to validate these initial findings and to verify and assess the potential clinical impact of these proteomic changes in patients diagnosed with morbid obesity and NAFLD. The risk of CVD development in such patients may be exacerbated by potentially altered HDL function.
2018-03-07T02:41:52.505Z
2018-03-06T00:00:00.000
{ "year": 2018, "sha1": "10efe2ecf41a1bddacd706e614e9defbb52a6278", "oa_license": "CCBY", "oa_url": "https://clinicalproteomicsjournal.biomedcentral.com/track/pdf/10.1186/s12014-018-9186-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "60cf2829fe675ea908047dda6dd915ce7a0ac574", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }